What goes into a CFO's dashboard for artificial intelligence and machine learning

Artificial intelligence and machine learning can be leveraged to improve healthcare outcomes and reduce costs -- here's how to monitor AI.

Jeff Lagasse, Editor

The use of artificial intelligence in healthcare is still nascent in some respects. Machine learning algorithms show potential to improve both clinical quality and financial performance, but the data picture in healthcare is complex, and crafting an effective AI dashboard can be daunting for the uninitiated.

A balance needs to be struck: harnessing myriad, complex data sets while keeping your goals, inputs and outputs as simple and focused as possible. It's about more than just having the right software in place. It's about knowing what to do with it, and knowing what to feed into it in order to achieve the desired result.

In other words, you can have the best, most detailed map in the world, but it doesn't matter if you don't have a compass.

AI DASHBOARD MUST-HAVES

Jvion Chief Product Officer John Showalter, MD, said the most important thing an AI dashboard can do is drive action. That means simplifying the outputs: perhaps two of the dashboard's components are AI-driven, and the rest is the contextual information an organization needs to make a decision.

He's also a proponent of color coding or iconography to simplify large amounts of information -- basic measures that allow people to understand the information very quickly.

"And then to get to actionability, you need to integrate data into the workflow, and you should probably have single sign-on activity to reduce the login burden, so you can quickly look up the information when you need it without going through 40 steps."

According to Eldon Richards, chief technology officer at Recondo Technology, there have been a number of breakthroughs in AI over the years, such that machine learning and deep learning are often matching, and sometimes exceeding, human capability for certain tasks.

What that means is that dashboards and related software can automate things that, as of a few years ago, weren't feasible for a machine -- things like interpreting radiology images or diagnosing certain types of cancer.

"When dealing with AI today, that mostly means machine learning. The data vendor trains the model on your needs to match the data you're going to feed into the system in order to get a correct answer," Richards said. "An example would be if the vendor trained the model on hospitals that are not like my hospital, and payers unlike who I deal with. They could produce very inaccurate numbers. It won't work for me."

A health system would also want to pay close attention to the ways in which AI can fail. The technology can still be a bit fuzzy at times.

"Sometimes it's not going to be 100 percent accurate," said Richards. "Humans wouldn't be either, but it's the way they fail. AI can fail in ways that are more problematic -- for example, if I'm trying to predict cancer, and the algorithm says the patient is clean when they're not, or it might be cancer when it's not. In terms of the dashboard, you want to categorize those types of values on data up front, and track those very closely."

KEY PERFORMANCE INDICATORS FOR AI AND ML

Generally speaking, you want a key performance indicator based around effectiveness. You want a KPI around usage. And you want some kind of KPI that tracks efficiency -- Is this saving us time? Are we getting the most bang for the buck?

The revenue cycle offers a relevant example, where a dashboard can track something like claim denials. KPIs that track the efficiency of denial resolution, and the total denials resolved with a positive outcome, can help health systems determine what percentage of denials were fixed and how many they got paid for. This essentially tracks the time, effort and, ultimately, the efficacy of the AI.
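As a rough illustration, and assuming a hypothetical denials worklist with status, payment and time fields (the record layout is invented; real revenue-cycle systems vary widely), those KPIs reduce to a few ratios:

```python
# Hypothetical denial-tracking KPIs: resolution rate, paid rate, and
# average touch time per denial. Sample records are invented.

denials = [
    {"resolved": True,  "paid": True,  "minutes_worked": 20},
    {"resolved": True,  "paid": False, "minutes_worked": 45},
    {"resolved": False, "paid": False, "minutes_worked": 10},
]

total    = len(denials)
resolved = sum(d["resolved"] for d in denials)
paid     = sum(d["paid"] for d in denials)
minutes  = sum(d["minutes_worked"] for d in denials)

print(f"Resolution rate: {resolved / total:.0%}")  # share of denials closed
print(f"Paid rate:       {paid / total:.0%}")      # share that recovered revenue
print(f"Avg touch time:  {minutes / total:.0f} min")
```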

"You start with your biggest needs," said Showalter. "You talk about sharing outcomes -- what are we all working toward, what can we all agree on?"

"Take falls as an example," Showalter added. "The physician maybe will care about the biggest number of falls, and the revenue cycle guy will care about that and the cost associated with those falls. And maybe the doctors and nurses are less concerned about the costs, but everybody's concerned about the falls, so that becomes your starting point. Everyone's focused on the main outcome, and then the sub-outcomes depend on the role."

It's that focus on specific outcomes that can truly drive the efficacy of AI and machine learning. Dr. Clemens Suter-Crazzolara, vice president of product management for health and precision medicine at SAP, said it's helpful to parse data into what he called limited-scope "chunks" -- distinct processes a provider would like to tackle with the help of artificial intelligence.

Say a hospital's focus is preventing antibiotic resistance. "What you then start doing," said Suter-Crazzolara, "is you say, 'I have these patients in the hospital. Let's say there's a small-scale epidemic. Can I start collecting that data and put that in an AI methodology to make a prediction for the future?' And then you determine, 'What is my KPI to measure this?'

"By working on a very distinct scenario, you then have to put in the KPIs," he said.

PeriGen CEO Matthew Sappern said a good litmus test for whether a health system is integrating AI in an effective way is whether it can be proven that its outcomes are as good as those of an expert. Studies that show the system can generate the same answers as a panel of experts can go a long way toward helping adoption.

The reason that's so important, he said, is that the accuracy of the tools can be all over the place. The engine is only as good as the data you put into it, and the more data, the better. That's where electronic health records have been a boon; they've generated a huge amount of data.

Even then, though, there can be inconsistencies, and so some kind of human touch is always needed.

"At any given time, something is going on," said Sappern. "To assume people are going to document in 30-second increments is kind of crazy. So a lot of times nurses and doctors go back and try to recreate what's on the charts as best they can.

"The problem is that when you go back and do chart reviews, you see things that are impossible. As you curate this data, you really need to have an expert. You need one or two very well-seasoned physicians or radiologists to look for these things that are obviously not possible. You'd be surprised at the amount of unlikely information that exists in EMRs these days."

Having the right team in place is essential, all the more so because of one of the big misunderstandings around AI: that you can simply dump a bunch of data into a dashboard, press a button, and come back later to see all of its findings. In reality, data curation is painstaking work.

"Machine learning is really well suited to specific challenges," said Sappern. "It's got great pattern recognition, but as you are trying to perform tasks that require a lot of reasoning or a lot of empathy, currently AI is not really great at that.

"Whenever we walk into a clinical setting, a nurse or a number of nurses will raise their hands and say, 'Are you telling me this machine can predict the risk of stroke better than I can?' And the immediate answer is absolutely not. Every single second the patient is in bed, we will persistently look out for those patterns."

Another area where a human touch is needed is radiological image interpretation. The holy grail, said Suter-Crazzolara, would be a supercomputer into which one could feed an X-ray from a cancer patient, and which would then identify the type of cancer present and what the next steps should be.

"The trouble is," said Suter-Crazzolara, "there's often a lack of annotated data. You need training sets with thousands of prostate cancer types on these images. The doctor has to sit down with the images and identify exactly what the tumors look like in those pictures. That is very, very hard to achieve.

"Once you have that well-defined, then you can use machine learning and create an algorithm that can do the work. You have to be very, very secure in the experimental setup."

HOW TO TELL IF THE DASHBOARD IS WORKING

It's possible for machine learning to continue to learn the more an organization uses the system, said Richards. Typically, the AI dashboard would provide an answer back to the user, and the user would note anything that's not quite accurate and correct it, which provides feedback for the software to improve going forward. Richards recommends a dashboard that shows failure rate trends; if it's doing its job, the failure rate should improve over time.
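A minimal sketch of the failure-rate trend Richards describes, assuming user corrections are logged per month. All numbers below are invented; the point is the direction of the trend, not the values.

```python
# Hypothetical failure-rate trend: each month, count how many AI
# answers users had to correct. A healthy feedback loop should show
# this ratio falling over time.

monthly_log = {
    "2019-01": {"answers": 1200, "corrected": 180},
    "2019-02": {"answers": 1350, "corrected": 150},
    "2019-03": {"answers": 1400, "corrected": 112},
}

for month, stats in sorted(monthly_log.items()):
    rate = stats["corrected"] / stats["answers"]
    print(f"{month}: failure rate {rate:.1%}")
# A rising trend here would suggest the model is drifting, not learning.
```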

"AI is a means to an end," he said. "Stepping back a little bit, if I'm designing a dashboard I might also map out what functions I would apply AI to, and what the coverage looks like. Maybe a heat map showing how I'm doing in cost per transaction."

Suter-Crazzolara sees these dashboards as the key to creating an intelligent enterprise, because they allow providers to innovate and look at data in new ways, which can aid everything from the diagnosis of dementia to detecting fraud and cutting down on supply chain waste.

"AI is at a stage that is very opportune," he said, "because artificial intelligence and machine learning have been around for a long time, but at the moment we are in this era of big data, so every patient is associated with a huge amount of data. We can unlock this big data much better than in the past because we can create a digital platform that makes it possible to connect and unlock the data, and collaborate on the data. At the moment, you can build very exciting algorithms on top of the data to make sense of that information."

MARKETPLACE

If a health system decides to tap a vendor to handle its AI and machine learning needs, there are certain things to keep in mind. Typically, vendors will already have models trained on certain data sets, which allows the software to perform a function learned from that data. If a vendor trained a model on a hospital whose characteristics differ from your own, there can be big differences in the efficacy of those models.

Richards suggested reviewing what data the vendor used to train its model, and discussing how much data the vendor needs in order to construct a model with the utmost accuracy. He also suggests talking to the vendor to understand how well they know your particular space.

"In most cases I think they've got a good handle on the technology itself, but they need to know the space and the nuances of it," said Richards. He would interview them to make sure he was comfortable with their depth of knowledge.

That will ensure the technology works as effectively as possible -- an important consideration, since AI likely isn't going away anytime soon.

"We're seeing not just the hype, but we're definitely seeing some valuable results coming," said Richards. "We're still somewhat at the beginning of that. Breakthroughs in the space are happening every day." 


Twitter: @JELagasse

Email the writer: jeff.lagasse@himssmedia.com