Kaiser Permanente Automates Quality Reporting To Joint Commission
Converting nearly half of core measures to e-measures saves time, but there are limits to what EHRs can do, cautions study.
Researchers at Kaiser Permanente have shown that it's possible to partially or fully automate the collection of data from an EHR for public quality reporting. They've also shown that this automation saves money compared to manual data abstraction. However, their paper in the Journal of the American Medical Informatics Association (JAMIA) cautions that their experience "illustrates the gap between the current and desired states of automated quality reporting."
In 2010, Kaiser Permanente's care reporting staff began to retool the Joint Commission core measures for automated quality reporting. The purpose of this program was to make reporting by Kaiser's 37 hospitals more efficient and more reliable, said Terhilda Garrido, Kaiser's VP for health IT transformation and analytics, and the paper's lead author, in an interview with InformationWeek Healthcare.
Kaiser had previously developed e-measures from scratch for quality improvement purposes but had never before tried to adapt existing quality measures to the EHR. The first batch of metrics it automated comprised 21 measures from six of the 13 core measure sets, including those for acute myocardial infarction (AMI), emergency department (ED) patient flow, immunizations, the surgical care improvement project (SCIP), pneumonia and venous thromboembolism (VTE) prophylaxis.
The results were encouraging. Comparing the time required to abstract each of 20 randomly selected cases with the time that the automated method took, including validation of the data, the researchers found that automation saved between five and 14 minutes per case, depending on the measure set. In reporting on the SCIP, partial automation reduced abstraction time by half.
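To put those per-case figures in perspective, here is a back-of-the-envelope sketch of how per-case minute savings scale into staff hours. The 5- and 14-minute bounds come from the study; the annual case volume is a hypothetical assumption for illustration only, not a figure Kaiser reported.

```python
# Illustrative only: converts the study's reported per-case savings
# into annual staff hours under an assumed abstraction volume.

SAVINGS_MIN_PER_CASE = (5, 14)        # minutes saved per case (study's reported range)
HYPOTHETICAL_CASES_PER_YEAR = 10_000  # assumed volume, NOT from the study

def annual_hours_saved(cases: int, minutes_per_case: float) -> float:
    """Convert per-case minute savings into annual staff hours."""
    return cases * minutes_per_case / 60

low = annual_hours_saved(HYPOTHETICAL_CASES_PER_YEAR, SAVINGS_MIN_PER_CASE[0])
high = annual_hours_saved(HYPOTHETICAL_CASES_PER_YEAR, SAVINGS_MIN_PER_CASE[1])
print(f"Estimated savings: {low:.0f} to {high:.0f} staff-hours per year")
```

Even at modest volumes, shaving minutes per case compounds into hundreds or thousands of abstractor hours, which is consistent with the paper's point that partial automation let Kaiser avoid hiring additional abstraction staff.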
This increase in efficiency could be valuable to many healthcare organizations as the number of quality measures they are required to report on continues to grow. As the paper notes, partial automation "saves time over a completely manual process, expanding the capacity of existing abstraction staff and allowing us to forego hiring additional abstractors despite an expanding number of quality measures."
Buoyed by this early success, Garrido said, Kaiser is continuing the conversion of core measures into electronic metrics. Its care reporting department is now looking at perinatal and heart failure measures and expanding those in the AMI e-measure set. "I believe HEDIS is on the horizon," she added, referring to the Healthcare Effectiveness Data and Information Set used to measure the quality performance of health plans.
Nevertheless, she cautioned, Kaiser encountered significant difficulties on its journey to automated quality reporting and sees some inherent limits in what it can get out of its EHR. To begin with, only some of the requisite data was in discrete fields. The percentage of required data elements that were discrete and could be mapped to the 21 e-measures ranged from 43% to 100%, averaging 61%. The rest came from administrative claims data and manual abstraction.
The paper cites another study showing that EHR fields captured only 35% of the required data elements for Meaningful Use reporting; that percentage rose to 65% after including the abstraction of free-text physician notes and medication administration records.
The limits of data capture are partly related to the measure itself: for example, a question about smoking cessation advice is more likely to be documented in a physician note than in a drop-down box. In addition, there are limits to how much discrete documentation doctors and nurses can be expected to do while trying to provide good patient care, the JAMIA study noted. The upshot is that EHRs, which were not designed for quality reporting, can generate only some of the requisite data for meaningful quality measures without manual abstraction.
Natural language processing, which parses free-text documentation such as transcribed dictation and places data in the right fields, could someday increase the automation of quality reporting. But it has much further to go before it reaches the 100% accuracy level that is necessary, Garrido said.
Other large organizations that have the right kind of programmers could adapt core measures to e-measures as Kaiser has done, she said. But EHR vendors won't provide much help, because the IT staff needs a deep knowledge of clinical workflow in each organization to locate the data elements they need in the EHR output and map them to the e-measures.
The Centers for Medicare and Medicaid Services (CMS) is phasing in requirements for electronic reporting of quality measures for Meaningful Use and the Physician Quality Reporting System (PQRS). Kaiser welcomes the transparency that such reporting could bring, Garrido said. However, she implored policy makers to make sure that all the e-measures are field tested before they're implemented at great cost to providers.
"There needs to be a level of practicality," she said. "Policy makers should require testing of those PQRS e-measures before mandating them for everyone."