Can Data-Powered Comparative Effectiveness Research Save Healthcare?

Mounting evidence suggests CER will deliver new, cost-effective treatment options. But at least one controversial problem needs to be resolved first.

Paul Cerrato, Contributor

June 26, 2013

5 Min Read

With so much emphasis from government and private insurers on the need to lower the cost of medical care, comparative effectiveness research (CER) has come into its own. CER compares two or more existing treatment regimens to determine which is most cost-effective. Since so many sophisticated software tools are now available to facilitate such research, healthcare IT executives need to stay well-informed about CER's strengths and limitations.

In the past, I've written about Clinical Query, a searchable patient data repository that Boston's Beth Israel Deaconess Medical Center uses to facilitate CER. Launched last year, the database lets researchers and clinicians look for potential connections between diseases, treatment options and risk factors, connections that in turn can become the jumping-off point for a research project.

If a Harvard researcher wants to compare the benefits of diuretics to ACE inhibitors among patients with hypertension, for instance, he or she can use Clinical Query to look at the records of more than 2 million patients and 200 million data points, including diagnoses, medications taken, lab values, and radiology images.
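
To make that kind of query concrete, here is a minimal sketch in Python of how a researcher might frame the diuretic-versus-ACE-inhibitor comparison against a de-identified extract. The file and column names (patient_id, drug_class, systolic_bp_change) are hypothetical stand-ins, not Clinical Query's actual schema or interface.

```python
import pandas as pd

# Hypothetical de-identified extract; the column names are illustrative only.
records = pd.read_csv("hypertension_cohort.csv")  # patient_id, drug_class, systolic_bp_change

# Keep only patients on one of the two drug classes being compared.
cohort = records[records["drug_class"].isin(["diuretic", "ace_inhibitor"])]

# Summarize the change in systolic blood pressure by drug class.
summary = cohort.groupby("drug_class")["systolic_bp_change"].agg(["count", "mean", "std"])
print(summary)
```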


A comparison of data on the two classes of high blood pressure meds might reveal that one is more effective than the other. And while the results of that CER analysis may not carry the same weight as a randomized clinical trial, in which groups of patients are actually given the drugs in real time to see which is more effective, the CER results can still guide clinicians on treatment options for their patients.

A CER Network Could Transform Medicine

During a recent conversation, John Halamka, MD, CIO at Beth Israel Deaconess, pointed out that Clinical Query is just the beginning of a much more ambitious attempt to aggregate not only the 2 million patient records in its own system but also the tens of millions of records held by major healthcare systems nationwide.

"For comparative effectiveness research, you may need 10 million, 20 million patients," Halamka said. "So wouldn't it be much better if you had a CER network, where Stanford, UCLA, Harvard and Mayo Clinic all decided to share [de-identified] patient data?" Grants from the Patient-Centered Outcomes Institute (PCORI), a federally sponsored agency, are going out to various organizations to turn this proposed network into a reality.

In April, PCORI laid out its grand vision of creating a National Patient-Centered Clinical Research Network to help improve CER. At the same time, it announced a funding program to support the network.

PCORI's vision has huge potential for improving clinical practice. One of the current shortcomings of clinical research is that so much of it is limited by the small number of patients enrolled in each study. In fact, several potentially valuable treatment options have been discarded because investigators were not able to detect a statistically significant difference between options A and B. Many of these investigations were guilty of what's referred to as a Type II error, in which a treatment regimen is deemed useless simply because the number of patients being evaluated was too small to detect a real therapeutic effect.

More than 25 years ago, a critique found that 71 "negative" studies published in respected medical journals had prematurely condemned potentially valuable treatments; the studies had enrolled too few subjects to justify the conclusion that the treatments were useless. Decades later, a second analysis revealed researchers were still making the same mistake: a JAMA review found that 383 randomized controlled trials (RCTs) were not large enough to detect a 25% to 50% difference between the experimental and control groups. Studies that take advantage of a network that includes millions of patients are far less likely to fall into that trap.
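
The statistical point is easy to quantify. The sketch below uses the standard normal-approximation formula for comparing two proportions to estimate how many patients per arm a trial needs to detect a given improvement; the 50% versus 60% response rates are made-up numbers for illustration, not figures from the studies cited above.

```python
from math import sqrt
from scipy.stats import norm

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Patients per group needed to detect p1 vs. p2 (two-sided normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)   # critical value for the significance level
    z_b = norm.ppf(power)           # critical value for the desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return round(numerator / (p1 - p2) ** 2)

# Detecting a 50% vs. 60% response rate takes roughly 390 patients per arm;
# a study with 50 per arm will usually miss it, a classic Type II error.
print(n_per_arm(0.50, 0.60))
```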

Massive Databases Don't Guarantee Success

A massive network of EMR-derived clinical data would be invaluable, but large numbers aren't enough. A database like this can serve as the starting point for a powerful observational study that could reveal, for example, that 10,000 patients taking penicillin for strep throat fared better than an equivalent number of patients taking a more expensive antibiotic. But such correlations don't establish a cause-and-effect relationship; randomized controlled trials are much better at that.
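
A toy simulation (my own illustration, not drawn from the article's sources) shows why. If sicker patients are preferentially given the more expensive antibiotic, a naive comparison of recovery rates will flatter the cheaper drug even when the two drugs are, by construction, equally effective:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Severity drives both the prescribing decision and the outcome; the drugs
# themselves are modeled as having identical effects.
severity = rng.uniform(0, 1, n)
gets_expensive = rng.random(n) < severity            # sicker patients get the pricier drug
recovers = rng.random(n) < (0.9 - 0.4 * severity)    # recovery depends on severity only

print("recovery rate, cheaper drug:  ", round(recovers[~gets_expensive].mean(), 3))
print("recovery rate, expensive drug:", round(recovers[gets_expensive].mean(), 3))
# The cheaper drug "wins" the naive comparison even though no causal difference exists.
```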

The other danger in putting too much faith in large CER studies that rely on EMR data is summed up by Tomas Philipson of the University of Chicago and Eric Sun of Stanford University. Their report, Blue Pill or Red Pill: The Limitations of Comparative Effectiveness Research, acknowledges that CER "measures the effects of different drugs or other treatments on a population, with the goal of finding out which ones produce the greatest benefits for the most patients." It then quotes President Obama's comment: "If there's broad agreement … [that] the blue pill works better than the red pill… and it turns out the blue pills are half as expensive as the red pill, then we want to make sure that doctors and patients have that information available to them."

The report goes on to explain that a 2005 CER analysis found little difference in effectiveness between older, less-expensive antipsychotic drugs and more expensive second-generation agents, and concluded that paying only for the cheaper medications would save $1.2 billion. But the CER analysis had a fatal flaw: It looked only at the effects of the two groups of drugs on an average patient. As the Philipson and Sun critique points out: "…individuals differ from one another and from population averages. Therefore, what may be on average a 'winning' therapy may simply not work for a large number of patients. Conversely, a drug that is less effective on average may still be the best, or only, choice for a sizable proportion of patients."
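
A stylized calculation (my own illustrative numbers, not data from the 2005 analysis) makes the same point: two drugs can look interchangeable on average while one of them fails badly for a minority of patients.

```python
# Hypothetical response rates: 90% of patients do about equally well on either
# drug, while 10% respond only to the newer, more expensive agent.
share = {"typical": 0.90, "atypical": 0.10}
response_rate = {
    "older, cheaper drug": {"typical": 0.65, "atypical": 0.05},
    "newer, costlier drug": {"typical": 0.60, "atypical": 0.60},
}

for drug, rates in response_rate.items():
    average = sum(share[group] * rates[group] for group in share)
    print(f"{drug}: population-average response {average:.2f}, "
          f"response among the atypical 10%: {rates['atypical']:.2f}")

# The averages are nearly identical (0.59 vs. 0.60), yet a cheapest-only policy
# leaves the atypical group with almost no effective option.
```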

Philipson and Sun conclude that paying only for the cheaper drugs would have resulted in "worse mental health for many thousands of people, resulting in higher costs to society that would equal or outweigh any savings in Medicaid costs."

The data that electronic health systems are creating will have a profound effect on the shape of healthcare reform. Using that data well will depend on a deeper understanding of CER's strengths and weaknesses.


About the Author(s)

Paul Cerrato

Contributor

Paul Cerrato has worked as a healthcare editor and writer for 30 years, including for InformationWeek Healthcare, Contemporary OBGYN, RN magazine and Advancing OBGYN, published by the Yale University School of Medicine. He has been extensively published in business and medical literature, including Business and Health and the Journal of the American Medical Association. He has also lectured at Columbia University's College of Physicians and Surgeons and Westchester Medical Center.

