If a CDS system isn't based on rock-solid data, don't expect to see better patient outcomes or long-term cost savings.

Paul Cerrato, Contributor

April 4, 2012

4 Min Read

5 Key Elements For Clinical Decision Support Systems


There are few things more embarrassing than misplaced self-confidence, especially when it resides in an expert who thinks he has the answers when in fact he doesn't see the whole picture. That realization came to mind as I was reading an editorial in this weekend's Wall Street Journal entitled, "Rise of the Medical Expertocracy."

Pamela Hartzband, MD, and Jerome Groopman, MD, from Harvard Medical School, discuss the U.S. Preventive Services Task Force's 2009 practice guideline recommending against routine mammography screening of women under the age of 50. One statistician on the task force even went so far as to call the recommendation a "no brainer," according to the WSJ editorialists.

The American Cancer Society's (ACS) experts certainly didn't think so. In fact, they urge women under 50 to have mammograms and offer solid evidence to support that position.

There are at least two takeaways for CIOs and CMIOs: Don't build your clinical decision support system on shaky ground. And give individual clinicians some wiggle room: a CDSS should provide guidelines, not rigid rules.

Regarding the first point, the practice guidelines that populate a computerized physician order entry (CPOE) system and serve as the underpinning for your EHR alerts must be based on solid evidence and a general consensus among the major expert organizations. The USPSTF recommendations contradicted the guidelines of other well-respected experts, including the ACS, so they're hardly grounds for universal adoption among hospitals and medical practices.
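To make that concrete, here is a minimal sketch, in Python, of one way a CDSS rule could carry its evidentiary pedigree so that an alert goes live only when the evidence is strong and the major expert bodies agree. The rule representation, the letter grades, and the organization list are all hypothetical, not drawn from any real CDSS product:

```python
from dataclasses import dataclass

@dataclass
class GuidelineRule:
    """A hypothetical CDSS rule annotated with its evidentiary pedigree."""
    name: str
    evidence_grade: str          # "A" (strong) through "D" (insufficient), illustrative
    endorsing_bodies: set[str]   # expert organizations backing the rule

# Hypothetical set of organizations whose consensus we require before deploying.
MAJOR_BODIES = {"USPSTF", "ACS", "AAFP"}

def safe_to_deploy(rule: GuidelineRule) -> bool:
    """Gate CPOE/EHR alerts on strong evidence AND broad expert consensus."""
    return rule.evidence_grade == "A" and MAJOR_BODIES <= rule.endorsing_bodies

mammo_under_50 = GuidelineRule(
    name="Defer mammography under age 50",
    evidence_grade="C",
    endorsing_bodies={"USPSTF"},  # the ACS explicitly disagrees
)
print(safe_to_deploy(mammo_under_50))  # False: no consensus, so keep it out of the CDSS
```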


By way of contrast, no one questions the need to track patients' smoking status--a Meaningful Use requirement--because the evidence behind such a recommendation is solid and universally accepted. You need a baseline headcount before you can measure how successful you are in getting patients to quit this life-threatening habit.
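As a toy illustration of that arithmetic (the records and status codes here are made up, not a real EHR schema), the baseline headcount is simply the denominator of the quit-rate metric:

```python
# Hypothetical patient records with a documented smoking status.
patients = [
    {"id": 1, "smoking_status": "current"},
    {"id": 2, "smoking_status": "never"},
    {"id": 3, "smoking_status": "current"},
    {"id": 4, "smoking_status": "former"},  # quit since baseline
]

# Baseline headcount: everyone documented as a smoker at some point.
baseline_smokers = sum(p["smoking_status"] in ("current", "former") for p in patients)
quitters = sum(p["smoking_status"] == "former" for p in patients)

# The quit rate is only meaningful relative to that documented baseline.
print(f"Quit rate: {quitters}/{baseline_smokers} = {quitters / baseline_smokers:.0%}")
```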

Similarly, the recommendation to do colonoscopy screening in adults over age 50 rests on solid ground because it's been shown to reduce colon cancer deaths. But your CDSS is stepping into quicksand if, for example, it requires clinicians to run PSA levels on all men over age 50 in the expectation that doing so reduces prostate cancer deaths. The proof just isn't there. Simply put, if the data haven't shown that PSA screening catches prostate cancer early enough to save lives, you can't expect better clinical outcomes or lower costs of care from putting such a guideline into your CDSS.
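Running those two screening rules through the same hypothetical gate from the earlier sketch shows the distinction (the grades and endorsements are illustrative annotations, not official ones):

```python
colonoscopy_50 = GuidelineRule("Colonoscopy screening at age 50+", "A",
                               {"USPSTF", "ACS", "AAFP"})
psa_all_men_50 = GuidelineRule("PSA screening for all men 50+", "D", set())

print(safe_to_deploy(colonoscopy_50))   # True: solid evidence, broad consensus
print(safe_to_deploy(psa_all_men_50))   # False: the proof just isn't there
```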

Regarding the second point, clinicians need a measure of autonomy as they decide which guidelines apply to individual patients. As I've mentioned before in this column, many doctors see medicine as an art as much as a science, and as such believe it can't be distilled into a series of evidence-based guidelines and rules.
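One way to give clinicians that wiggle room is a "soft stop": the alert states the guideline but lets the clinician override it with a documented reason. The sketch below assumes entirely hypothetical function and field names, not any real CDSS API:

```python
from typing import Optional

def screening_alert(patient_age: int, override_reason: Optional[str] = None) -> str:
    """Advisory alert: recommend, but never block a clinician's order."""
    advice = "Colonoscopy screening advised" if patient_age >= 50 else "No screening due"
    if override_reason:
        # Record the clinician's judgment rather than enforcing the rule.
        return f"{advice} [overridden: {override_reason}]"
    return advice

print(screening_alert(62))
print(screening_alert(62, "Colonoscopy done last year at an outside facility"))
```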

One source of skepticism about practice guidelines is that they're usually based on the results of large clinical trials that use exclusion criteria to ensure the patient population being studied is free of other chronic disorders that might skew the results. A trial evaluating a drug for hypertension, for example, would include only patients who have hypertension and nothing else.

Such exclusion criteria help investigators get clean data, but the results don't mimic the real world, where doctors often treat patients suffering from a variety of these "comorbidities." It's unclear, then, whether applying such study results in community practice will improve patient outcomes or reduce expenditures.
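A toy count makes the gap concrete, using made-up patients and a hypothetical eligibility check mirroring the trial's exclusion criteria:

```python
# Hypothetical clinic panel; every patient has hypertension, most have more.
clinic_patients = [
    {"id": 1, "conditions": {"hypertension"}},
    {"id": 2, "conditions": {"hypertension", "diabetes"}},
    {"id": 3, "conditions": {"hypertension", "ckd", "copd"}},
    {"id": 4, "conditions": {"hypertension", "diabetes"}},
]

def trial_eligible(patient: dict) -> bool:
    # The hypothetical trial enrolled patients with hypertension and nothing else.
    return patient["conditions"] == {"hypertension"}

eligible = [p for p in clinic_patients if trial_eligible(p)]
print(f"{len(eligible)} of {len(clinic_patients)} real-world patients match the trial cohort")
# 1 of 4: the guideline's evidence base describes a minority of actual patients.
```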

Virtually all the movers and shakers in health IT realize that medical practice has to be data driven. But that data has to be really strong and it has to be properly analyzed. And sometimes the results of that analysis should be: We don't have enough data to make any recommendation. If major medical policy makers can't agree on how to apply research data to patient care, why should CIOs, CMIOs, or clinicians in the trenches place their confidence in such quicksand?


About the Author

Paul Cerrato

Contributor

Paul Cerrato has worked as a healthcare editor and writer for 30 years, including for InformationWeek Healthcare, Contemporary OBGYN, RN magazine and Advancing OBGYN, published by the Yale University School of Medicine. He has been extensively published in business and medical literature, including Business and Health and the Journal of the American Medical Association. He has also lectured at Columbia University's College of Physicians and Surgeons and Westchester Medical Center.
