The demonstration transferred vital-signs data from Philips IntelliVue, a system that aggregates patient data from monitors and provides clinical decision support, to Google Glass, which is itself still in beta. Brent Blum, lead for wearable device R&D at Accenture Technology Labs, told InformationWeek Healthcare that the two companies built their own interface because the Google Glass Mirror API is still limited.
The advantage of being able to see the vital signs on Google Glass, Blum said, is that the surgeon doesn't have to turn his head away from the patient to look at a monitor. In another use case, a doctor could walk into a patient's room and begin talking to the patient while looking at the key data from an EHR on Google Glass. This could reduce the barrier that arises between doctor and patient when the doctor has to look at a computer screen to get this information, said Frances Dare, managing director of Accenture's connected health business.
Google Glass in hospitals also could, according to a news release, be used to:
-- Allow clinicians to call up images and other patient data from anywhere in the hospital.
-- Access a pre-surgery safety checklist.
-- Give clinicians the ability to view the patient in the recovery room after surgery.
-- Conduct live, first-person point-of-view videoconferences with other surgeons or medical personnel.
-- Record surgeries from a first-person point-of-view for training purposes.
Google Glass and other wearable displays offer four modes of interaction, Blum noted. Users can touch them, speak to them, tilt their heads or gaze in a certain direction to command the displays to do certain things.
"The prototype we worked on factors in the need for a sterile environment in the OR," he said. "Before the doctor scrubs in, they can tap the side of the display. But later, they're using voice and head tilt to advance it."
One possibility for Glass's voice-recognition capability, he added, is enabling a doctor to control equipment in the OR. A clinician can already use the speech recognition to document observations in an EHR.
Blum said there's little chance of Glass distracting a surgeon while she's operating, because she'd have to look up and to the right to see the display. Another type of wearable, known as a full-field immersive display, has a semi-transparent screen that does not block the user's vision, he noted. In the OR, a surgeon wearing this kind of display could see the incision site while also checking the instructions for the procedure.
Google Glass is not the only kind of technology that allows touch-free manipulation of information in the OR. With a device based on Microsoft Kinect technology, for example, some surgeons have used arm gestures to manipulate images on a computer screen while maintaining a sterile environment.
Asked whether Google Glass could be used in conjunction with Kinect, Blum responded, "Gesture control is another option for controlling the display or controlling the other devices in the room. It brings another weapon to the arsenal."
One big challenge in using Glass to provide information during care is that its display is very small. That's fine for vital signs, but might be problematic if a doctor wants to see the summary screen of an EHR.
Blum noted that small screens in general can be a problem in moving from a PC-based view of EHR data to a mobile one. He believes that vendors will adapt their software to provide only the most critical information to users of Google Glass.
The Google Glass prototype exemplifies what Accenture Technology Labs is all about, Blum said. "We focus on identifying emerging technologies that are enterprise relevant. We apply those in new and unique ways to address business challenges and opportunities for our clients."
In this case, he noted, Philips is both the client and a collaborator. But it's up to that company to decide what to do next with the research.