One of my recent columns outlined problems with the Food and Drug Administration's proposed rules on mobile medical applications. Evidently, I'm not the only one with concerns.
The American Medical Informatics Association recently issued a statement in which it questioned the wisdom of lumping all such apps together. One critical factor in determining the risk posed by clinical decision support software is whether the app is "mediated by a human being or not," says Meryl Bloomrosen, the AMIA's VP for public policy and government relations.
The AMIA thinks the most rigorous regulation should be reserved for clinical decision support programs that automatically intervene in the patient care process without a human middleman.
For example, an app that automatically instructs an IV medication pump or insulin infusion pump--so-called smart pumps--to change dosages requires more regulatory oversight than a mobile app that offers clinicians advice on how to diagnose or treat patients, according to the AMIA. That distinction makes sense: An app that lets a trained healthcare professional evaluate the software's recommendation inserts a safety valve into the workflow and is probably less risky.
At a recent FDA workshop on the mobile medical apps draft guidelines, Bloomrosen described a descending order of risk, implying a diminishing need for regulation of such software programs. At the top of the list are apps to control devices like the smart pumps mentioned above. Lower down the list would be mobile apps that directly intervene in patient care but for which there is a clinician intermediary to provide oversight. Even further down the risk ladder and presumably requiring less FDA regulation would be human-mediated clinical decision support that's not patient-specific--a mobile app that provides access to a poison-control database for use in managing patients in an emergency setting is a good example.
The AMIA also wonders why the FDA is placing so much emphasis on mobile apps related to clinical decision support (CDS). Don't CDS systems that reside on a hospital's servers pose just as much--or as little--threat to patient safety as systems that reside on smartphones and iPads?
Regardless of how the FDA's final rules on mobile medical apps turn out, it's clear the proposed guidelines have generated a lot of industry concern. A recent report from the Consumer Electronics Association indicates that more than a third of consumers are willing to send medical data to their doctor over a wireless device. That finding has prompted Ben Arnold, a senior research analyst at the CEA, to conclude: "With wider adoption on the part of the user and recommender [the doctor], mechanisms will likely need to be put in place to ensure the data from these devices is correct and any recommendations made are warranted."
I don't envy the FDA's mandate: Finding the balance between over- and under-regulation is no easy task. But from all the comments I've seen to date, it's pretty clear the agency needs to do a lot more homework before issuing its final guidelines.