Grahame Grieve, creator of a health data integration model based on RESTful Web services, discusses how a simplified approach could influence the future of health data with Georgia Tech health informatics expert Mark Braunstein.
If we are to free the data in healthcare that is too often locked inside isolated systems, simplicity matters.
In an arguably seminal 2011 blog post, The Rise and Fall of HL7, Eliot Muir, founder and CEO of Interfaceware, a major Toronto-based HL7 solutions provider, wrote, "Complicated standards can be pushed for a while but ultimately markets reject them."
Muir went on to recommend a Web services approach. Australian standards guru Grahame Grieve added a simplified data model based on the HL7 RIM in what has come to be called Fast Healthcare Interoperability Resources (FHIR). It's now a part of HL7 and seems to be spreading like its homophone, fire.
I've mentioned Grahame and FHIR previously, but I recently had a chance to interview him for a new course in health informatics I'm developing at Georgia Tech as part of our first-of-its-kind online master's of computer science degree program. I'd like to share Grahame's key points with you here.
Q: Grahame, please tell us a bit about yourself and how you got into data and interoperability standards.
Grieve: I was initially a laboratory scientist and then a development lead for a laboratory information systems company, where much of my work was with data integration. From there I got involved in HL7 and found I was more interested in the standards than laboratory science, so I founded Health Intersections and became a consultant in the area.
Q: I've earlier quoted from Eliot Muir's blog post. Can you expand on it a bit?
Grieve: We've spent 10-15 years developing very complex standards in the pursuit of getting full control of the complexity of healthcare. People just couldn't make them work and the market rejected the more complicated variants. National health programs squandered a great deal of money in the pursuit of the more complex components of these standards. Even relatively simple components, such as CDA, are generating a lot of backlash from the marketplace. In short, HL7 needed to find a fresh approach.
Q: This, of course, led to FHIR. How did that evolve?
Grieve: We started by looking at how people who were having success with standards were doing it. The answer was invariably RESTful Web services -- a set of technologies and paradigms about information exchange to create something that was simple, stable, easy to use, and technically very scalable. Also, a solid framework for security and authentication had developed, all around support of the Web. FHIR is simply taking those base technologies and welding them with what we know about health information to create an easy-to-use specification for exchanging health information.
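To make the RESTful paradigm Grieve describes concrete: FHIR exposes each kind of health information as a resource addressable by a simple URL and manipulated with the standard HTTP verbs. The sketch below builds FHIR-style URLs following the specification's `[base]/[type]/[id]` REST convention; the base URL is a hypothetical example server, not a real endpoint:

```python
# Minimal sketch of FHIR-style RESTful resource URLs.
# The base URL is a hypothetical example, not a real FHIR server.

BASE = "https://example.org/fhir"

def read_url(resource_type: str, resource_id: str) -> str:
    """URL for reading one resource: GET [base]/[type]/[id]."""
    return f"{BASE}/{resource_type}/{resource_id}"

def search_url(resource_type: str, **params: str) -> str:
    """URL for searching resources: GET [base]/[type]?param=value."""
    query = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    return f"{BASE}/{resource_type}?{query}" if query else f"{BASE}/{resource_type}"

# GET reads, POST creates, PUT updates, DELETE removes -- the same
# verbs any web developer already knows.
print(read_url("Patient", "123"))
print(search_url("Observation", patient="123"))
```

The point of the design is exactly the one Grieve makes: nothing here is healthcare-specific plumbing, so the existing web ecosystem (caching, TLS, OAuth-style authentication) comes along for free.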
Q: FHIR resources, which are essentially the FHIR data model, seek to encompass the clinical and other concepts that are most commonly used, including in the care of complex patients. How are the resources encompassed by FHIR being defined, and how will that evolve?
Grieve: It's fundamentally an iterative process. We have workgroups of domain experts from around the world, in particular clinical and business process experts within healthcare. We draft the standards based on the requirements they define. We then work with a number of health programs and vendors to test the standards and evolve them to something that can become part of a draft standard. We will then repeat the process until we have something that we feel can become part of a normative standard (probably a couple of years out). We're not in a hurry because once adopted, normative standards aren't changeable, so we want as much input into our process as possible before we get to that point.
Q: How has FHIR adoption been going?
Grieve: We expected that we'd initially have a small amount of engagement, particularly in the personal health record space, a new area still being defined and one in which the FHIR concept already fits well. It hasn't worked out that way, and in fact, we're getting broad adoption from many subdomains within healthcare. More, in fact, than we can deal with. We didn't expect this, particularly with a new, changeable early beta standard. It's really tenfold more than we expected. It appears that the overall argument for FHIR is so strong that it's driving adoption well ahead of expectation.
Q: As you look ahead a few years, what do you see?
Grieve: We're working to make data exchange cheaper because it's easier to do. Beyond data exchange, we need interoperable workflows and processes. We can't have that now because we can't exchange data, but once we can, I expect that to be the next focus. Over the next few years the focus will shift from data interoperability to clinical interoperability.
Looking past that, I would like to see an era where healthcare data availability is no longer the problem, but rather, what can you do with the data? Innovations like patient-centered healthcare will depend on clinical process aggregation, so we'll likely see consolidation into large-scale provider networks that will probably even cross jurisdictional boundaries, again based on the ease of aggregating health data to provide a better healthcare process. However, we're still a long way from that happening.
Mark Braunstein is a professor in the College of Computing at Georgia Institute of Technology, where he teaches a graduate seminar and the first MOOC devoted to health informatics. He is the author of Contemporary Health Informatics (AHIMA Press, 2014).