MModal Brings Speech Recognition To Clinical Decision Support
Technology introduced even as UC-Berkeley studies shortcomings of speech recognition.
The launch of a speech recognition-based clinical decision support platform in the cloud by Franklin, Tenn.-based MModal is the latest step in the growth of systems that pull actionable information from unstructured electronic medical data. But some researchers believe current technologies are flawed and are kicking off an effort to pinpoint, and then improve, some of the shortcomings.
MModal, formerly known as MedQuist, this week introduced the first two applications in its new MModal Catalyst suite of products. One, called MModal Catalyst for Quality, puts data into context so provider organizations can improve documentation and coding, as well as meet requirements for Meaningful Use of electronic health records (EHRs). The other, MModal Catalyst for Radiology, structures information from radiology reports.
"So much of the data today that's valuable is locked up in unstructured data," Mike Raymer, senior VP of solutions management at MModal, told InformationWeek Healthcare. "We take every clinical observation and encode it with SNOMED," he said. Similarly, prescription data gets encoded according to the RxNorm ontology and laboratory reports are matched to the Logical Observation Identifiers Names and Codes (LOINC) system.
This form of natural language processing--what MModal calls "natural language understanding"--helps with context to produce more accurate coding and documentation without having to perform full chart audits, according to Raymer. For example, to determine whether a hospital administered aspirin to a patient complaining of chest pains, the technology can search the patient's chart to identify mentions of chest pain.
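The idea Raymer describes--mapping free-text clinical mentions to standard vocabulary codes so they can be queried--can be sketched in miniature. The mapping table below is illustrative only (the SNOMED CT codes shown are examples, and a real system would resolve terms against the full SNOMED CT, RxNorm, and LOINC ontologies rather than a hand-built dictionary), and `encode_mentions` is a hypothetical helper, not MModal's API:

```python
import re

# Illustrative, hand-built mapping of free-text phrases to SNOMED CT codes.
# A production system would resolve terms against the full ontology,
# handle synonyms, negation ("denies chest pain"), and abbreviations.
TERM_TO_SNOMED = {
    "chest pain": "29857009",   # example code, shown for illustration only
    "aspirin": "387458008",     # example code, shown for illustration only
}

def encode_mentions(note_text):
    """Scan a free-text clinical note and return the standard codes
    for any vocabulary terms it mentions."""
    found = {}
    lowered = note_text.lower()
    for term, code in TERM_TO_SNOMED.items():
        # Whole-word match so "aspirin" does not fire on unrelated substrings.
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            found[term] = code
    return found

note = "Patient presented with chest pain; aspirin 325 mg administered."
print(encode_mentions(note))
```

Once mentions are encoded this way, the aspirin-for-chest-pain check becomes a structured query over codes instead of a manual chart audit.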
"These will be tools used by providers as payers impose value-based reimbursement," Raymer explained.
In the next three to four years, MModal expects to have 35-45 applications as part of the Catalyst suite, including modules specific to nursing documentation, long-term care, and home care, and for various medical specialties. "Our learning engine could be applied to readmissions management," Raymer said.
Catalyst builds upon MModal Fluency, a service introduced last month that adds cloud-based speech capture to EHRs. Together, the MModal offerings are similar to what IBM and Nuance Communications, through their partnership with the University of Pittsburgh Medical Center, are doing with similar technology called clinical language understanding. "It's immediate feedback," Raymer said.
But is speech recognition accurate enough for precision applications such as healthcare?
The International Computer Science Institute (ICSI), a research lab affiliated with the University of California, Berkeley, disclosed this week that it is in the midst of a yearlong study of the limitations and challenges of current automatic speech recognition technologies.
"This is a unique research project in that we are qualitatively and quantitatively exploring what is wrong with automatic speech recognition. From that we hope to gain insights into how we can improve ASR, potentially going forward in entirely new directions,'' ICSI deputy director and project leader Nelson Morgan, said in a statement.
"When you don't know specifically what is wrong with a technology, you are left with a hit-or-miss situation. This research should give us some clarity," Morgan explained.
The research project, set to run through March 2013, will examine the scientific assumptions behind acoustic modeling to help identify potential technical challenges. It also will survey experts in the field of speech recognition to gauge their opinions about what does and does not work.