GenAI: Risk vs. reward causing adoption to stall
AI is not yet 100% accurate, forcing hospitals to weigh risk against productivity gains and financial savings.
Photo: Courtesy of Searce
Risk is one reason hospitals in the United States do not yet have large-scale adoption of generative AI, according to Patrick Bangert, senior vice president of Data, Analytics and AI at Searce, a cloud solutions and technology services provider.
Bangert believes the U.S. industry will see large-scale adoption in about three to five years, a shift already underway in other countries.
"In fact we are seeing large-scale adoption in the Far East," Bangert said. "In Japan, China, they are adopting these technologies at a far faster rate."
This is because those countries have fewer governmental regulations and a greater tolerance for risk, he said.
For instance, there's a small risk someone will hack into a system's electronic health record, a risk Bangert believes will eventually be lowered to virtually zero.
The use of generative AI for diagnostics is close to 99% accurate, he said. While this sounds good, to doctors it means 1 out of every 100 patients will be wrongly diagnosed. Nothing short of 100% accuracy ensures safety.
The data needs to be cleaned, and security measures need to be in place to ensure the system doesn't get hacked.
"It is good to roll these systems out to hospitals if they do it right," Bangert said. "Each problem has a solution if they roll it out with the right kind of governance and security."
"The general punchline is AI is never 100% accurate," Bangert said. "We will have failures. The root question is how to deal with it."
WHY THIS MATTERS
The tradeoff with AI is between risk on one side and productivity and financial savings on the other.
AI is already widely used for automated note taking, and large language models have elevated the quality of converting a conversation into a written document.
Since the average physician spends two hours in a 10-hour day doing clerical work, there's a productivity gain of 20% from this technology alone, according to Bangert.
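Bangert's figure can be checked with back-of-the-envelope arithmetic. The sketch below is illustrative only; the two-hour and 10-hour numbers are the ones he cites:

```python
# Back-of-the-envelope productivity math from Bangert's figures:
# a physician spends 2 clerical hours in a 10-hour workday.
clerical_hours = 2
workday_hours = 10

# If automated note taking reclaims those clerical hours, the
# fraction of the day freed up is the productivity gain.
productivity_gain = clerical_hours / workday_hours
print(f"Productivity gain: {productivity_gain:.0%}")  # Productivity gain: 20%
```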
Hospitals want generative AI as a profit-lifting tool that leads to more productivity and more revenue per hour. However, they are wary of GenAI being oversold as the best thing since sliced bread.
"When everyone says AI is hyped, yes it is," Bangert said. "It is capable of less than what the marketing people tell you."
Former banker Manny Krakaris, CEO of Augmedix, knows the numbers on how AI can impact ROI.
For purely routine administrative functions, AI can be cost-effective, he said. The industry has been doing this for a long time with bots that can answer questions.
Generative AI's large language models create text summaries that take the brunt of the work away from physicians.
But hospitals and physicians don't need AI to create a medical note based on a conversation, Krakaris said. Having a scribe in the exam room to type notes costs about $28,000 to $40,000 a year, he said. The industry still employs about 13,000 scribes.
A fully automated, though not do-it-all, solution costs about $600 a month, or $7,200 a year, a huge savings compared with hiring a scribe.
On the high end, an estimated 10% of physicians and providers pay $30,000 per doctor for the full virtual service for documentation.
The biggest tier is the lower end, which is dominated by dictation tools, Krakaris said. About half a million physicians are in this segment, paying $75 to $150 a month, or $900 to $1,800 a year. Most of the burden here falls on the doctor.
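Krakaris' figures can be laid side by side as annualized per-physician costs. This is a minimal sketch using only the numbers in the article; the tier labels are illustrative:

```python
# Annualized documentation costs per physician, from the figures
# Krakaris cites. Each value is a (low, high) range in US dollars per year.
MONTHS = 12

tiers = {
    "in-room scribe":       (28_000, 40_000),               # salaried human scribe
    "full virtual service": (30_000, 30_000),               # high end, ~10% of physicians
    "fully automated":      (600 * MONTHS, 600 * MONTHS),   # about $600 a month
    "dictation tools":      (75 * MONTHS, 150 * MONTHS),    # $75 to $150 a month
}

# Print tiers from cheapest to most expensive by the low end of the range.
for name, (low, high) in sorted(tiers.items(), key=lambda kv: kv[1][0]):
    label = f"${low:,}" if low == high else f"${low:,} to ${high:,}"
    print(f"{name:22s} {label} per year")
```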
ROI is realized when hospitals can capture everything that is reimbursable that happened during the patient encounter, said Krakaris, whose company offers this solution. Augmedix recently collaborated with HCA Healthcare and Google Cloud on its new Augmedix Go ambient AI technology.
THE LARGER TREND
Bangert, a mathematician by training, said that 25 years ago, AI was considered a failed technology. In the 1990s, he worked on applying AI to satellite technology for the military.
In a recent announcement, Oracle Cerner added generative AI to its EHR platforms and launched a Clinical Digital Assistant to help providers use GenAI and voice commands to reduce manual work and documentation burden.
Arcadia, a data analytics platform, also recently announced the launch of a generative AI assistant at the point of care, offering a menu of turnkey use cases to deliver insights within existing care team workflows.
The Government Accountability Office recently published a report on AI's use and growth.
At its best, AI can improve medical diagnosis, identify potential national security threats more quickly and solve crimes. But there are also significant concerns in areas including education, intellectual property and privacy, the GAO said.
The GAO has looked at three major areas of AI advancement: generative AI systems that can create text (apps like ChatGPT and Bard, for example) as well as images, audio, video and other content when prompted by a user; machine learning in fields that require advanced imagery analysis, from medical diagnostics to military intelligence; and facial recognition AI technology.
The GAO has developed an AI Accountability Framework to help Congress address the complexities, risks and societal consequences of emerging AI technologies. It is built around the four principles of governance, data, performance and monitoring.
ON THE RECORD
"AI technologies have enormous potential for good, but much of their power comes from their ability to outperform human abilities and comprehension," the GAO said. "From commercial products to strategic competition among world powers, AI is poised to have a dramatic influence on both daily life and global events. This makes accountability critical to its application, and the framework can be employed to ensure that humans run the system – not the other way around."
Twitter: @SusanJMorse
Email the writer: SMorse@himss.org