New methods of assessing drug benefits can help cut healthcare costs
With the cost of drugs a critical issue in healthcare, health insurance companies and government payers need to understand how new and existing drugs compare in terms of benefits and risks.
But there's a problem. When drugs are first approved, they have typically been compared in clinical trials to either a placebo or to a single standard of care, meaning an established and widely accepted treatment. Yet there may be multiple drugs on the market that have already been shown to be better than that standard, and in diseases with high unmet needs, drugs may even be approved without any comparison at all.
"This," said David Cheng, a postdoctoral researcher at Harvard's T.H. Chan School of Public Health, "limits our ability to compare the effectiveness of new drugs to all the other available treatment options that are out there."
That, in turn, is essentially a waste of money, as less effective drugs can end up attracting investment dollars while more effective drugs go underfunded.
To get around this problem, people often engage in "a kind of naïve comparison," said Cheng in an analysis for the American Statistical Association.
"They'd look, say, at the rates of survival for a cancer drug by a given time in one study and then compare them to another, even though the two studies would not be directly comparable," he said. "The patients might have more late-stage disease in one study and more early-stage disease in the other, or some other significant difference in patient characteristics, and this wouldn't be taken into account in the analysis. You'd end up with massive confounding."
Dealing with such confounding bias is especially challenging as analysts and researchers often only have access to the full individual patient-level data for the new drug, and have to rely on data summaries from academic publications for existing drugs on the market.
To overcome the problem, analysts and researchers have turned to a method called matching-adjusted indirect comparison, or MAIC.
"If you have access to the individual-level data from one drug trial," said Cheng, "then you could reweight the observations or adjust the final analysis so that the patient characteristics match the summaries of another trial."
Despite the increasing use of MAIC to inform drug reimbursement decisions, the statistical performance of MAIC has not been extensively studied or reported. The research conducted by Cheng and colleagues is the first to identify conditions under which MAIC is valid.
The researchers found that, if applied correctly, MAIC can provide unbiased estimates of a treatment's effect when the patient populations of the two trials are sufficiently similar, and when the probability that an individual is selected into one trial versus the other can be adequately modeled. Through simulations, the research also compares MAIC's potential for bias with that of other common approaches to such cross-study comparisons.
"This work can help decision-makers understand when MAIC results are reliable and when there are challenges in the data that would produce unreliable results," said Cheng. "This could, in turn, enable better decision-making and ultimately inform smarter allocation of resources to drugs that work best."
Twitter: @JELagasse
Email the writer: jeff.lagasse@himssmedia.com