NY hospital exec: Multimodal LLM assistants will create a “paradigm shift” in patient care




Using multimodal large language models (LLMs), hospital systems can build powerful virtual doctor's assistants that proactively track and diagnose conditions for patients, said a medical director at NewYork-Presbyterian (NYP), one of New York's leading hospital systems.

Dr. Ashley Beecy, medical director of AI operations at NYP, spoke at VentureBeat's AI Impact Tour event in New York last Friday. She said her hospital system is already experimenting with generative AI in several discrete areas that provide value but carry minimal risk, such as summarizing conversations from patient visits. But she hopes enthusiasm around generative AI will drive the workflow changes needed for hospitals to build powerful, all-encompassing assistants that "will change the paradigm for which I practice."

Multimodal LLM technology can provide all-encompassing, proactive care

Beecy, who is also a practicing cardiologist at the hospital, did not put a timeframe on when this would be possible but mentioned it as something she’d like to see progress toward over the next year. She said patients get referred to her after they experience chest pain. But she said she’d prefer to know if her patient was going to have a heart attack before it happened. “And so we can use this technology and all of the data we’re collecting about the patient, to get insights from a multimodality perspective – insights from things like imaging, echocardiograms, and electrocardiograms, that maybe me as a human can’t see, but AI can and allow me to act on it… before events happen.”


She said much of the technical capability for this is in place, but it’s a matter of adjusting internal workflows and processes to allow this to happen, or what she called “change management.” This will take a lot of work and testing, she acknowledged, and also require the sharing of ideas by national health organizations, since it will require larger structural change beyond her own hospital. She sees a path where the hospital system first tackles low-risk administrative use cases for generative AI, such as summarizing verbal conversations from patient visits. The system will then tackle clinical diagnostics with generative AI, for example, ways to better detect heart disease in individual cases. Only then can it bring all of these elements together in the more ambitious step she envisions.


“What I’d love to see is a colleague – a model that can encompass all of these at once, and I can say when’s my next patient going to be here, how long should that patient be scheduled for based on the time it has taken in the past to see them, and what is the summary of all the visits they’ve had since I’ve last seen them so that I can interpret that, and do they need refills, and can you automatically populate that into the electronic record for me so that I can order it – all these tasks encompassed together so it becomes ubiquitous in our workflow.” (See her full comments in the video below).

Beecy said NYP employees have generally embraced generative AI so far and are eager to participate in its use. NYP, which is affiliated with the medical schools of Cornell and Columbia universities, has around 49,000 employees and affiliated physicians.

Workflows and processes still need to be worked out

Beecy, in a conversation moderated by senior AI writer Sharon Goldman, said the hospital is aligning AI's abilities – things like pattern recognition, summarization, data extraction and content generation – with the most important, high-value applications that are also low risk. One of her personal favorites, she said, is reducing the administrative burden on doctors by recording patient visits, so that the conversation can be transcribed into a note in real time. Doctors have become what she calls "ambient scribes," working behind a computer and only occasionally looking at the patient, then spending hours manually transcribing notes in the evenings. "We have to change that," she said. The hospital will need patients to consent to recording visits, she said, because transparency is essential. But the payoff would be significant: it takes away the task of creating content and instead lets the doctor validate and edit.
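The workflow Beecy describes – record with consent, transcribe, draft a note, then have the clinician validate and edit – can be sketched as a simple pipeline. This is an illustrative sketch only: the function bodies are stubs standing in for whatever speech-to-text and LLM summarization services a hospital actually uses, and all names are hypothetical.

```python
# Illustrative ambient-scribe pipeline: record -> transcribe -> draft note -> clinician review.
# The transcribe/draft steps are stubs standing in for real speech-to-text and LLM services.

def transcribe(audio: bytes) -> str:
    """Stand-in for a speech-to-text service call."""
    return "Patient reports intermittent chest pain on exertion."

def draft_note(transcript: str) -> dict:
    """Stand-in for an LLM summarization call; returns a structured draft, never a final note."""
    return {"summary": transcript, "status": "draft", "needs_review": True}

def clinician_review(note: dict, approved: bool) -> dict:
    """The doctor validates and edits rather than writing from scratch."""
    note = dict(note)
    if approved:
        note["status"] = "final"
        note["needs_review"] = False
    return note

raw_audio = b"..."  # recorded with the patient's consent
note = clinician_review(draft_note(transcribe(raw_audio)), approved=True)
print(note["status"])  # final
```

The design point worth noting is the last step: the model only ever produces a draft, and nothing enters the record until the clinician approves it – matching Beecy's framing that the provider reviews everything at the end.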

Moving on from administrative applications like this, she said using AI for clinical diagnostics is more challenging. But one use case NYP is evaluating is using electrocardiograms to detect structural heart disease – problems with the heart's valves or muscles, usually diagnosed by an ultrasound of the heart. Not everyone gets such an ultrasound, but many people get an electrocardiogram, a snapshot of the heart's electrical activity that, with AI, can also flag heart disease. "You can screen people and get them the care they need earlier," she said.

Risks are plentiful, but excitement around AI abounds

When asked if she had any concerns around the risk of generative AI making mistakes in these applications, she said there was “a lot to unpack” around the issues of risk, but that as long as the doctor reviews the summaries of visits and diagnoses, most risk can be avoided. She said the technology is “not at 50 percent, because we wouldn’t use it. It’s not at 100 percent, because it would replace all of our jobs. It’s probably at 90 percent, which is why I say the provider would review it at the end.”

Another risk is overreliance on the technology, she said. LLM technology has improved so much – Beecy cited ChatGPT's progress from GPT-3.5 to GPT-4 – that humans may become too complacent, and having a human in the loop may lose its value.

NYP is taking a conservative, measured approach to the technology, Beecy said, by making sure it aligns two main stakeholders: those who want to roll out generative AI tools and those who are going to use them. There is some concern among employees about how the technology is integrated into workflows and what it means for them, she said, but added: "I would say there's a lot of excitement… right now we have people who are eager to pilot."

Generative AI is proving to be a democratizing force

In the past, Beecy said, technology tended to be pushed down to employees from the top. But this is the first time technology has been truly democratized, in that NYP's doctors and other providers have access to ChatGPT, she said. "They can use AI, they can communicate with it, and it's actually allowing them to come up with use cases they find valuable." That use cases come from end users themselves, rather than from the top down, is helping with engagement and change management, she said.

She said the hospital is surveying patient groups to understand just how transparent NYP needs to be with the technology, including whether patients want to know every time it is being used. Beecy said these are complicated questions that require a multidisciplinary team, perhaps even with sociologists and bioethicists at the table.

Sarah Bird, global lead for responsible AI engineering at Microsoft, spoke in a session following Beecy's, in a conversation moderated by Sharon Goldman and me. We asked her whether Beecy's vision for an ambitious, all-encompassing doctor's assistant would be possible anytime soon, given where Microsoft's AI technology stands. (Microsoft works closely with OpenAI to provide generative AI to enterprise companies.)

Bird suggested the technology can provide the building blocks needed for such an assistant, for example breaking down a flow into particular tasks, and grounding technology with access to reliable information. But she said one concern with generative AI summaries is that the technology can add information that may not be correct, or omit information. Omitting a symptom from a summary of a doctor’s visit may totally change the meaning of the diagnosis, she said. “We have been experimenting with techniques where we give the model a deeper understanding of medical information so that it actually summarizes effectively.”

Full disclosure: Microsoft sponsored this New York event stop of VentureBeat’s AI Impact Tour, but the speakers from NewYork-Presbyterian and Citi were independently selected by VentureBeat. Check out our next stops on the AI Impact Tour, including how to apply for an invite for the next events in Boston on March 27 and Atlanta on April 10.



