Hospitals Struggle With EHRs For Quality Reporting, AHA Says

AHA report details problems hospitals encountered with EHRs in MU stage 1, recommends policy changes to address them.

Ken Terry, Contributor

July 26, 2013

4 Min Read

Hospitals encountered major problems collecting quality data electronically to meet the requirements of Meaningful Use stage 1, according to a report prepared for the American Hospital Association (AHA). Unless there are significant policy changes, the report warned, those difficulties will continue in MU stage 2.

This is not the first time the AHA has complained about the quality reporting criteria for Meaningful Use. Commenting on the proposed rule for MU stage 2 in April 2012, the AHA told the Centers for Medicare and Medicaid Services (CMS) that hospitals had had "significant difficulty" using EHRs to do quality reporting. But this report goes into much greater detail about what those difficulties were and shows that even institutions with considerable EHR experience had big problems.

Researchers interviewed executives and operational personnel from four hospitals to gather information for the report. They included large and small, urban and non-metropolitan facilities that had implemented their EHRs five to 10 years earlier.

[ What's the real motive behind EHR adoption? Read High EHR Usage: Driven By Need Or Regulations? ]

The surveyed hospitals had high hopes for electronic quality reporting. In federal programs alone, they were required to report on 90 different measures, and the ability to generate quality data automatically out of clinical workflow promised to decrease that burden. But such was not to be.

"Based on the experiences of the hospitals in this case study, the current approach to automated quality reporting does not yet deliver on the promise of feasibility, validity and reliability of measures or the reduction in reporting burden placed on hospitals," the report said.

In Meaningful Use stage 1, hospitals were required to report on 15 electronic clinical quality measures (eCQMs) that together encompassed more than 180 data elements. The known challenges going into the project included:

-- The modification of existing measures without robust testing to determine whether all the necessary data were available in the EHRs.

-- Known errors in the eCQMs as specified by CMS.

-- Lack of a mature e-specification development and updating process.

As they began their implementations, the hospitals discovered that much of the required data had not been captured in the required format. To capture the data so that the quality measures could be automatically populated, the facilities' IT departments and vendors had to modify the EHRs, in some cases adding new structured fields. "The inflexibility of the eCQM reporting tools" supplied by EHR vendors was partially responsible for this onerous task, the researchers said.
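
To illustrate the distinction the report draws, here is a minimal, hypothetical Python sketch, not drawn from the report or any specific vendor's system, of why an eCQM reporting tool can count a case automatically only when the underlying fact is stored as a discrete field rather than buried in free-text documentation. The field names and measure logic are invented for illustration.

```python
# Hypothetical illustration (not from the AHA report): a fact recorded only as
# free text cannot be counted automatically, while a discrete field can.

from typing import Optional

# Documentation as narrative text -- the numerator fact is buried in prose.
free_text_note = {"note": "Pt counseled on smoking cessation during admission."}

# The same fact captured in a new structured field the eCQM tool can query.
structured_record = {
    "encounter_id": "12345",
    "smoking_cessation_counseling": True,
    "counseling_datetime": "2013-05-02T14:30:00",
}

def numerator_met(record: dict) -> Optional[bool]:
    """True/False if the discrete field exists; None means manual chart abstraction."""
    return record.get("smoking_cessation_counseling")

print(numerator_met(structured_record))  # True  -> counted automatically
print(numerator_met(free_text_note))     # None  -> requires manual review
```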

Organizations with integrated systems that used the same database across departments had less trouble than those in which there was little or no interoperability across departmental systems. In the latter case, staff had to manually enter data from other systems into the system being used to extract data for the eCQMs.

Across all the institutions, 80% of the effort "entailed changes to hospital workflow solely to capture eCQM data," the report noted. This didn't contribute to patient care and diverted efforts from other important hospital initiatives.

Despite concerted efforts, none of the organizations were able to validate their results fully. Some hospitals were able to perform technical validations to verify that all the data required by the eCQM reporting tool could be captured in a discrete format in the EHR. But they were unable to verify the extent to which clinicians had entered the discrete data used by the eCQM reporting tools.
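
The "technical validation" described here amounts to checking that every data element a measure needs has a corresponding discrete field in the EHR build; it says nothing about whether clinicians actually fill those fields in. A rough, hypothetical sketch of that kind of check, with invented element and field names, might look like this:

```python
# Hypothetical sketch of a "technical validation": confirm that every data
# element an eCQM needs maps to a discrete field in the EHR build.
# Element and field names are invented for illustration.

ecqm_required_elements = {
    "arrival_datetime",
    "ed_departure_datetime",
    "admit_decision_datetime",
}
ehr_discrete_fields = {"arrival_datetime", "ed_departure_datetime"}  # as built

missing = ecqm_required_elements - ehr_discrete_fields
if missing:
    print("Measure cannot be auto-populated; add structured fields for:", sorted(missing))
else:
    print("All required elements can be captured as discrete data.")
```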

Three of the organizations used a "staff-intensive and unsustainable concurrent review process to encourage documentation directly by nurses or order-entry by physicians," the report said. "The accuracy of the data used for eCQM calculation is dependent on staff review of entries in the EHR and manual data input … Organizations either spent considerable time in re-work to revise and validate the eCQM measurement process … or chose to ignore the results in favor of those derived from the chart-abstracted versions of the measures."

The report made several policy recommendations:

-- Slow the pace of the transition to electronic quality reporting, with fewer but better-tested measures, starting in MU stage 2.

-- Make EHRs and eCQM reporting tools more flexible so data capture can be aligned with workflow and interoperable so that data can be shared across departmental systems.

-- Improve EHRs and reporting tools to address usability and data management issues.

-- Test eCQMs for reliability and validity before adopting them in national programs.

-- Provide clear guidance and tested tools to support a successful hospital transition to increased electronic quality reporting requirements. Among other things, the report suggested, the government should create a reliable, validated crosswalk from SNOMED-CT, the language used in the eCQMs, to the new ICD-10 diagnostic codes (illustrated below).
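
As a rough illustration of what such a crosswalk would do, the hypothetical sketch below translates SNOMED CT concept identifiers into ICD-10-CM codes with a simple lookup table. The code pairings shown are illustrative examples, not the validated mapping the report recommends the government provide.

```python
# Hypothetical crosswalk lookup from SNOMED CT concept IDs (used in eCQM value
# sets) to ICD-10-CM codes. The pairings below are illustrative, not a
# validated mapping.

snomed_to_icd10 = {
    "44054006": "E11.9",  # type 2 diabetes mellitus (illustrative pairing)
    "22298006": "I21.9",  # myocardial infarction (illustrative pairing)
}

def crosswalk(snomed_code: str) -> str:
    """Translate a SNOMED CT concept ID to an ICD-10-CM code, if mapped."""
    try:
        return snomed_to_icd10[snomed_code]
    except KeyError:
        raise ValueError(f"No crosswalk entry for SNOMED CT code {snomed_code}")

print(crosswalk("44054006"))  # -> E11.9
```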

The release of the report coincides with the AHA's statement to the Senate Finance Committee urging the Department of Health and Human Services to extend the timelines for Meaningful Use stage 2.


About the Author

Ken Terry

Contributor

Ken Terry is a freelance healthcare writer, specializing in health IT. A former technology editor of Medical Economics Magazine, he is also the author of the book Rx For Healthcare Reform.
