Electronic health records don't necessarily bolster quality of care in ambulatory settings, according to a study by Stanford University researchers. This isn't the first study to find EHRs disappointing at improving care or cutting costs. But isn't the success of EHRs -- like that of big applications in other industries -- closely dependent on how these systems are used, the training users receive, and how robust and easy to use the products are?
For their study, published Jan. 24, Stanford associate professor of medicine Dr. Randall Stafford and researcher Max Romano analyzed physician data on 255,402 outpatient visits at doctor offices and hospitals from 2005 to 2007. About 30% of the visits involved EHRs, and 17% also involved clinical decision support (CDS) tools. But compared with non-EHR visits, EHR and CDS use resulted in better care on only 1 of the 20 quality indicators the researchers assessed. (The better care came to high-risk patients who received diet counseling thanks to electronic triggers to clinicians.)
"Our findings indicate no consistent association between EHRs and CDS and better quality," concluded the authors of the study. "These results raise concerns about the ability of health information technology to fundamentally alter outpatient care quality."
The Stanford researchers' sobering findings about EHRs echo those of a few other studies in recent years. For instance, in late 2009, Harvard Medical School researchers reviewed five years of data from 4,000 hospitals that had implemented some type of EHR and concluded that those organizations had not lowered costs or improved care compared with hospitals using paper-based systems.
With $27 billion in federal incentives beginning to flow this year to healthcare providers who implement EHRs and other health IT systems, these studies are sure to (again) stir up outrage about wasteful government spending. But I'm not convinced that future studies examining EHR success rates will end up as disappointing.
For one, the studies conducted so far looked at EHR implementations that were put in place several years ago, prior to any widespread movement toward standardized, meaningful use of health IT. The federal government's laundry list of meaningful use requirements is nit-picky for good reason.
Whether the government should even venture into the business of mandating how private organizations use (let alone purchase) computer systems is one debate. However, with the government being the largest payer of health claims through Medicare and Medicaid -- and agreeing to subsidize the purchase of these electronic systems -- it makes sense for Uncle Sam to set some guidelines for their use. That's especially true if you expect to be able to track later whether widespread, consistent use of EHRs really does improve care and process efficiency or reduce costs.