Hospital groups, Joint Commission president say use of claims data invalidates comparisons published on HospitalCompare website.
As part of Medicare's safety and value-based purchasing initiatives, the Centers for Medicare and Medicaid Services (CMS) recently started publishing additional information about hospitals' relative safety performance on its HospitalCompare website. The agency's initiative is encountering fierce opposition from hospital groups as well as the Joint Commission, the organization that accredits hospitals.
"Hospitals are working very hard to improve safety on a daily basis," Mark Chassin, MD, president of the Joint Commission, told InformationWeek Healthcare. "These [CMS] data don't really contribute much to our knowledge, understanding, or ability to improve, because they're not good quality measures."
Chassin argued that HospitalCompare's ratings of hospital performance on inpatient complications and mortality rates are flawed because they're based on claims data, not on clinical data from hospital charts or electronic health records (EHRs).
Patrick Conway, MD, chief medical officer of CMS and director of the agency's office of clinical standards and quality, agreed that clinical data is preferable to claims information for measuring hospital-acquired conditions (HACs) and deaths related to hospitalizations. But in a statement to InformationWeek Healthcare, he contended that we should not let the perfect be the enemy of the good.
"Reporting HAC data demonstrates CMS' commitment to improve patient safety," he said. "By making HAC data transparent, CMS sheds light on those preventable events where patients are harmed while seeking care."
Before adding the safety information, HospitalCompare already showed how well institutions did on various process measures, such as whether Medicare patients who'd had heart attacks received aspirins and beta blockers. Hospitals voluntarily report this data, which is extracted from their records. The website also features 30-day mortality data on patients with three primary diagnoses: heart attack, heart failure, and pneumonia.
The new postings include hospital-specific data on deaths of patients who had breathing problems after surgery, had an operation to repair a weakness in an abdominal aorta, or had a serious treatable complication after a procedure.
Additionally, HospitalCompare now shows how often particular hospitals' patients suffer complications such as a collapsed lung, a blood clot following surgery, or an accidental cut or tear during treatment. Earlier this year, CMS began posting data on other complications, such as pressure sores, falls, and catheter-related infections.
Aside from the competitive implications of comparing hospitals publicly on these measures, all of this data will be used in calculating how much hospitals might lose in Medicare reimbursement under CMS' value-based-purchasing program, scheduled to begin a year from now.
According to Chassin, the new metrics are "fatally flawed and misleading." The measures of hospital-acquired conditions, he said, are based on how secondary diagnoses--diagnoses other than the patient's primary problem--are coded for billing purposes. "That's not a reliable assessment of whether the outcome [i.e., the complication] occurred," said Chassin. "Secondary diagnoses are notoriously variable in how they're coded."
Conway insisted, however, that this data is reliable because "coders assign diagnosis codes based on the objective evidence presented to them in a medical record and in accordance with guidelines." If there's a diagnosis code that indicates a complication, he said, "this identifies a quality improvement opportunity for hospitals and provides information to consumers."
The problem with the three new mortality measures, as well as the earlier ones, Chassin said, is that CMS inadequately adjusts the data for the relative risk factors of patients when they were admitted to the hospital. Because it relies on secondary diagnoses, the risk adjustment fails to reflect the severity of a patient's condition, he said.
Conway retorted, "We disagree that the mortality measures are flawed, and the community at large agrees with us. The risk-standardized mortality measures were developed by a team of clinical and statistical experts from Yale and Harvard universities, using a methodology that has been published in peer reviewed literature.
"The 30-day mortality measures are estimated with Medicare administrative data using models that were validated against medical record-based models. In these validation efforts, the results of the models with administrative claims and enrollment data were shown to be highly correlated with the results of models based on clinical data."
Chassin pointed out that the Joint Commission's Quality Check report card on hospital performance is based entirely on clinical data. Moreover, he said, the Joint Commission's measures have been tested in the real world and shown to be effective in helping hospitals improve their outcomes.
Ironically, CMS and the Joint Commission use many of the same "core" measures in their published reports, and the hospital data is gathered and analyzed the same way for both organizations. But not all of the measures shown in Quality Check are posted on the HospitalCompare website. For example, CMS is not publishing core measures for perinatal care or care in psychiatric hospitals.
In Conway's view, HospitalCompare, Quality Check, and other published comparisons of hospital performance can all be beneficial, despite their differences in methodology. "We welcome the availability of other data sources ... to help bring more transparency and visibility to health care quality," he stated.