More Mobile EHRs Add Speech Recognition

Third-party interfaces to iPad-native applications allow more EHR vendors to voice-enable their products.

Ken Terry, Contributor

June 10, 2013


9 Mobile EHRs Compete For Doctors' Attention (click image for larger view and for slideshow)

While some companies still lag behind, EHR vendors are moving rapidly to add speech recognition to their mobile products, either directly or through third-party interface vendors.

In the first category is Cerner, which just last month integrated Nuance Communications' speech recognition product with its ambulatory mobile EHR for iPads, according to Jon Dreyer, director of mobile solutions marketing for Nuance. In an interview with InformationWeek Healthcare, Dreyer added that in January, Epic embedded Nuance in its latest mobile EHRs for the iPhone and iPad. Allscripts, which voice-enabled its Sunrise inpatient mobile EHR some time ago, doesn't yet have speech in its Wand ambulatory EHR. But an Allscripts spokesperson told InformationWeek Healthcare that Wand will be integrated with the MModal and Apple Siri voice recognition applications later this summer.

[ A recent usability survey puts Athenahealth on top. Read Athenahealth EHR Wins Usability Poll. ]

Other major companies are also using third-party vendors to provide an iPad-native front end that includes speech recognition for their EHRs. For example, Dreyer said, Nuance is integrated with Iconx, which makes a mobile interface for NextGen, and with MedMaster Mobility, which does the same for Greenway.

Small independent vendors have also built speech-enabled mobile EHRs for certain specialty areas, such as emergency departments, urgent care centers, and dermatology. For example, Nuance is embedded in Sparrow EDIS, Montrue Technologies' ED-specific iPad application, and in Touch Medix's Lightning Charts, also designed for the ED. Modernizing Medicine and EZDerm have devised mobile EHRs specifically for dermatologists.

"The reality is that it's impractical to do any kind of documentation on mobile devices if you don't have speech," Dreyer stated. "Nobody's going to type into this thing or peck away at the onscreen keyboard. So the feedback from customers in the market today that are leveraging this has been very positive."

This is especially true in very busy environments such as the ED and in specialties where physicians must give their full attention to physical exams, Dreyer said. "It's either where the users themselves are mobile and jumping from one patient to the next, or where the physician is more hands-on and having a computer between physician and patient gets in the way."

The ability to use speech for ordering simple meds is already available on some speech-enabled mobile products. But much more is coming down the line. To begin with, Dreyer noted, Nuance wanted to make sure that its cloud-based speech recognition product for mobile EHRs was very fast and accurate. "That's the number one thing that the user needs and wants," he said.

But now some of Nuance's developer partners are beginning to add "command and control" features for information retrieval. Introduced by Nuance less than a year ago, these features allow users to ask the application to show them things like their patient lists, a particular patient's lab results, or the date of a patient's next appointment.
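To make the idea concrete, here is a minimal sketch of how such a command-and-control layer might route a recognized utterance to a read-only EHR query. Everything in it is hypothetical: the MOCK_EHR data, the intent patterns, and the handle_utterance function are illustrative stand-ins, not Nuance's actual developer API.

```python
# Hypothetical sketch: map a transcribed voice command to a read-only
# EHR lookup. All names and data here are illustrative assumptions.
import re

# Stand-in for the EHR server the mobile client would actually query.
MOCK_EHR = {
    "patients": ["Alice Brown", "Carlos Diaz"],
    "labs": {"Alice Brown": "Potassium 4.1 mEq/L (2013-06-08)"},
    "appointments": {"Alice Brown": "2013-06-17 09:30"},
}

# Each intent pairs a pattern for the recognized speech with a handler.
INTENTS = [
    (re.compile(r"show (?:me )?my patient list", re.I),
     lambda m: ", ".join(MOCK_EHR["patients"])),
    (re.compile(r"show (?:me )?labs for (?P<name>[\w ]+)", re.I),
     lambda m: MOCK_EHR["labs"].get(m.group("name").strip(), "no recent labs")),
    (re.compile(r"when is (?P<name>[\w ]+?)'s next appointment", re.I),
     lambda m: MOCK_EHR["appointments"].get(m.group("name").strip(), "none scheduled")),
]

def handle_utterance(text: str) -> str:
    """Route one recognized utterance to the first matching intent."""
    for pattern, handler in INTENTS:
        match = pattern.search(text)
        if match:
            return handler(match)
    return "Sorry, I didn't understand that command."

if __name__ == "__main__":
    print(handle_utterance("Show me my patient list"))
    print(handle_utterance("Show labs for Alice Brown"))
    print(handle_utterance("When is Alice Brown's next appointment?"))
```

A production system would, of course, authenticate the clinician and query the EHR server itself rather than an in-memory table, but the division of labor is the same: speech recognition produces text, and a thin intent layer turns that text into EHR requests.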
By the end of the year, Dreyer said, Nuance will introduce the next generation of command and control, with interactive features that depend on its "clinical language understanding" (CLU), a form of natural language processing. This "virtual assistant," Dreyer explained, will manage the dialog between the user and the application.

"For example, if a doctor wants to start a patient on a medication," he said, "the system can ask for clarifying information based on the information it needs. It knows it needs medication, dosage, frequency, dispensed amount, and refills in order to place a med order. So it'll continue to ask that, and CLU is parsing out the information you're telling it so you can get a string of information, and if you miss anything, it will let you know. It will also look into the drug interaction tables in the EMR and leverage that content and present it to the user in a very natural flow."

How does the speech recognition application access the EHR database to get the drug interaction rules? All mobile EHRs are thin clients interacting with a server that stores the EHR application and the data, so Nuance's cloud-based application can communicate with that server to retrieve the necessary information.

One thing that's still unclear, however, is how much clinicians will trust a speech recognition system that could potentially misinterpret what they say when they're ordering a medication or a lab test. This has been a challenge for Intermountain Healthcare and MModal, which are jointly developing a speech recognition system that can be used in computerized physician order entry (CPOE). To date, their beta version has been used only for ordering common, frequently prescribed meds. Lab and imaging orders and nursing orders are next on the roadmap.
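As a rough illustration of the slot-filling dialog Dreyer describes, the sketch below keeps prompting until all five order fields are captured, then screens the order against a stand-in interaction table. The names (place_med_order, INTERACTIONS, the sample drugs) are assumptions for illustration; neither Nuance's CLU engine nor any EHR's interaction tables are shown here.

```python
# Hypothetical sketch of a slot-filling medication order dialog:
# keep asking until every required field is captured, then screen
# the new med against the patient's active meds.

REQUIRED_SLOTS = ("medication", "dosage", "frequency", "dispense", "refills")

# Stand-in for drug-interaction rules the cloud service would fetch
# from the EHR server; real rules live in the EHR's own tables.
INTERACTIONS = {frozenset(["warfarin", "aspirin"]): "increased bleeding risk"}

def place_med_order(initial_slots: dict, current_meds: list, ask) -> dict:
    """Fill missing order fields by prompting, then check interactions."""
    slots = dict(initial_slots)
    for slot in REQUIRED_SLOTS:
        while not slots.get(slot):          # keep asking until filled
            slots[slot] = ask(f"What {slot} for this order?")
    for med in current_meds:                # screen against active meds
        warning = INTERACTIONS.get(frozenset([slots["medication"], med]))
        if warning:
            print(f"Warning: {slots['medication']} + {med}: {warning}")
    return slots

if __name__ == "__main__":
    # Scripted answers stand in for recognized speech.
    answers = iter(["20 mg", "once daily", "30 tablets", "2"])
    order = place_med_order({"medication": "warfarin"},
                            current_meds=["aspirin"],
                            ask=lambda prompt: next(answers))
    print("Order placed:", order)
```

The design point the quote makes is visible in the loop: the assistant, not the clinician, tracks which fields are still missing, and the interaction check runs before the order is placed rather than after.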

About the Author

Ken Terry

Contributor

Ken Terry is a freelance healthcare writer, specializing in health IT. A former technology editor of Medical Economics Magazine, he is also the author of the book Rx For Healthcare Reform.
