2024 will see generative AI turn from talk to implementation
If AI makes a wrong diagnosis or if someone is seriously hurt, there's no law determining who's at fault, says Patrick Bangert.
In 2023, AI became the talk of the industry. In 2024, implementation is expected to soar.
"The year 2023 didn't really see the adoption of generative AI. We saw a lot of talking," said Patrick Bangert, senior vice president of data, analytics and AI at Searce, an AI consultant. "Now we're in a position in the year 2024 where we can use genAI to do something."
Leading the change this past year has been ChatGPT, the chatbot launched by OpenAI in November 2022. Built on a large language model, it puts a simple chat interface in front of the underlying technology, so anyone can use it without working through an application programming interface.
Large language models have been around for about 10 years, Bangert said. The real innovation is the chat-style interface, which is letting far more people try the technology out.
LLMs are already widely used in healthcare to simplify physician note-taking and to improve back-office efficiency in billing, coding and invoicing.
Clinical adoption of AI will be slower because of questions around risk, insurance coverage and liability, according to Bangert.
AI's potential to make a mistake in care, however small the risk, may be acceptable in countries that weigh the greater good of a population, Bangert said. It is not acceptable in the United States, where a single individual dying from an AI mistake means a great deal.
Also, people will tolerate mistakes made by humans, but not mistakes from machines.
"Humanity has different requirements from human decision-makers and machine decision-makers," Bangert said.
Regulations and laws are needed to give teeth to President Biden's Executive Order on AI. The order, released in October 2023, directs the Department of Health and Human Services to have a mechanism in place to collect reports of "harms or unsafe healthcare practices" involving AI and to act to remedy them.
"The Biden executive order is window dressing, because it's vague and there are no penalties," Bangert said.
Physicians need federal regulations and protection from AI liability. If AI makes a wrong diagnosis or someone is seriously hurt, there is no law determining who is at fault, Bangert said. Existing malpractice law doesn't apply.
"They're basically operating in a lawless territory," he said. "My personal opinion is it really belongs to the government, the lawmaker, because, if physicians make a diagnosis error, they are liable. Their career is over."
While insurers are not yet restricting coverage for AI-related risks, according to Business Insurance, some are asking healthcare providers about their use of AI and how it may be applied in a clinical setting.
AI is expected to benefit health insurers by helping to identify medical issues sooner, which helps control rising healthcare costs, according to Moody's Investors Service.
But AI development is outpacing the ability of some providers and payers to integrate it into the clinical setting, according to Benefits Pro. Data needs to be collected, analyzed and shared, and it must become part of the normal workflow.
"AI will democratize data analytics by allowing many more people, beyond data scientists, to extract information from data and generate actionable conclusions based on it," Benefits Pro said, citing the Moody's report. "Ultimately, this could put better, more tailored information in the hands of healthcare providers, allowing them to deliver more precise diagnoses and treatments."
Until that democratization happens, AI for medical use will first go to patients who can afford it, according to Bangert. For instance, an AI-generated second opinion on a cancer diagnosis can be available within an hour, compared with the weeks it can take to get an appointment and a response from another physician.
"It's expensive to get a second opinion," Bangert said. "It's a $2 billion industry."
Twitter: @SusanJMorse
Email the writer: SMorse@himss.org