Ever since the glitch-ridden launch of the site on October 1, IT experts of all stripes have been weighing in with their thoughts on what went wrong and why. InformationWeek last week conducted a series of interviews with experts in which we asked for a more prescriptive analysis: How do you recover from an IT disaster on the scale of HealthCare.gov?
I recap some of the common themes (and differences of opinion) below, but we'd also like your point of view. Have you ever been involved in a technology turnaround on a big, important project that failed at launch? Did you see that initial failure coming? What are the most successful strategies you've seen for reorganizing for success?
It's hard not to revert to the "blame game" -- pointing out what should have been done in the first place but wasn't -- and there's plenty more to say about that. But put yourself in the shoes of Jeffrey Zients, the man drafted to get the HealthCare.gov project on track. Would you have deployed additional manpower in a "tech surge"? Would you have shut the site down while making repairs? Would you have outsourced more (or fewer) functions? And when you report back to President Obama, which specific recommendations would you make about federal government procurement reform or IT project management strategies? How would you make those changes stick, so that the next president, Republican or Democrat, will be able to make big e-government promises and keep them?
[Read our complete coverage of the HealthCare.gov launch here.]
This exercise shouldn't be about what you think of the Affordable Care Act, a.k.a. Obamacare. Although the two issues have become intertwined, the performance of the website and the wisdom of the law are two different things.
Whether you're cheering for the success or demise of Obamacare, the facts are the facts, and the fact is this technology implementation undercut the policy implementation. I think the president and his top advisers made far too many assumptions about how easy it would be to launch a consumer-friendly insurance shopping website. They failed to comprehend how much bigger a job it is than fielding, say, a campaign website.
Martin Abbott and Michael Fisher, a couple of scalability consultants and authors we interviewed, said the government's mistakes weren't so different from the ones they've seen many private companies make when they misjudge the needed scalability and capacity of their Web systems. The one big difference they see is the partisan environment surrounding the project, where those trying to salvage HealthCare.gov and the programs it represents are competing for attention with those who want them to fail.
Here are a few key questions surrounding the HealthCare.gov failure:
Should it have been shut down? One of the things that didn't happen but should have, according to one camp, was to shut down the website while it was being fixed, on the premise that it was launched prematurely and clearly wasn't ready. We've heard this line of thinking from computer security experts, both in interviews and Congressional testimony. The overall performance of the website doesn't inspire confidence in its security, and there's also evidence that warnings of security flaws were ignored during the site's construction.
The implication is that the Obama administration failed to shut down the site for "political reasons," because it would have meant admitting defeat and delaying implementation of the law.
Abbott and Fisher disagree, saying the best way to fix an underperforming website is to make incremental changes and test them against real-world traffic, not against simulated load tests. Even security problems are better solved with a methodical approach rather than a panicked shutdown, they said.
Also, one of the original sins of the HealthCare.gov implementers was that they sent it live in a big bang, trying to bring the site up to full capacity in one day rather than having some sort of beta period or soft launch to gradually build capacity and test functionality. If the site were taken offline and relaunched, that approach would amount to a second big bang -- and, likely, big problems, Abbott and Fisher said.
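The soft launch Abbott and Fisher describe is commonly implemented as a percentage-based rollout: a stable hash of each visitor's ID decides who reaches the new site, and operators ramp the percentage up as capacity problems get fixed. The sketch below is a minimal illustration of that idea, not anything used on HealthCare.gov; the function names, the `ROLLOUT_PERCENT` value, and the "waiting page" fallback are all hypothetical.

```python
import hashlib

def in_rollout(user_id: str, percent: int) -> bool:
    """Deterministically bucket a user into the current rollout percentage.

    Hashing the user ID keeps each visitor's experience stable as the
    percentage ramps up (e.g., 1% -> 5% -> 25% -> 100%): once a user is
    admitted at 5%, they remain admitted at every higher percentage.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100  # 0..99, uniform-ish across users
    return bucket < percent

# Hypothetical ramp value, raised gradually as the site stabilizes.
ROLLOUT_PERCENT = 5

def route(user_id: str) -> str:
    """Send a slice of visitors to the new site; everyone else waits."""
    return "new_site" if in_rollout(user_id, ROLLOUT_PERCENT) else "waiting_page"
```

A big-bang launch is the degenerate case of setting the percentage to 100 on day one, which is exactly what a shutdown-and-relaunch would repeat.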
Bryce Williams, the founder of Extend Health and now managing director of exchanges for Towers Watson, agreed that no going concern -- inside government or outside -- would willingly shut down its primary website. A business would be even more likely to keep an underperforming site operating while working to improve it.
What would you do? Shut it down, or not? Why?
Is this a job for government IT? Former Air Force Col. Mark Douglas, who as a military IT leader oversaw some technology turnarounds, acknowledged the limits of the government's capability to build complex systems on its own. "The US government is not the leader in commercial IT, nor should it be," he said. Government employees must focus on the inherently governmental aspects of a program, creating an efficient division of labor with private industry partners.
On the other hand, one of the villains in this story is the government's process for procuring IT services. David Blumenthal, a former director of the Office of the National Coordinator for Health IT, argues that managers most involved in an IT project's requirements often are kept at arm's length from the people who select contractors to deliver on those requirements, making for unnecessary dysfunction.
Extend Health's Williams suggested another way the government could partner with private industry: let private insurance brokers (such as his) handle more of the consumer enrollments. Under that scenario, the government would focus on services only it can provide, such as determining eligibility for federal subsidies online, while private firms would be responsible for delivering "a spectacular user experience," something he doesn't see as a government core competency.
What parts would you outsource? What would you keep in government hands? And if services are to be outsourced, how must the contracting process change?
Does a "tech surge" make sense? Borrowing an analogy from military operations in Iraq, the Obama administration decided to address HealthCare.gov's problems with a "tech surge" of additional manpower, effort, and oversight. The experts we interviewed were ambivalent about the value of such a surge, saying that it's difficult for new people, no matter how talented, to come up to speed on a mature IT project and figure out what needs to be done to fix it. On the other hand, the existing team members may know what needs to be done but aren't empowered to do it.
"On mega, custom-coded IT implementations that trip before the finish line, many organizations want to throw money at the problem in the form of outsider IT SWAT teams. Keep your money," Col. Douglas advised, "unless your program team is literally incapable of fixing the problems." He added that it's "extremely difficult, expensive, and time-consuming to bring in new people to work on custom code."
Abbott and Fisher said project leaders could put additional manpower to good use as long as they focus it on identifying and correcting problems -- the hundreds of bugs the developers have been fixing over the past couple of months.
What do you say? Surge or no surge? Keep the project team or throw the bums out?
Those are some of our key questions. We'd like your answers. Also, what are the questions we forgot to ask?
Though the online exchange of medical records is central to the government's Meaningful Use program, the effort to make such transactions routine has just begun. Also in the Barriers to Health Information Exchange issue of InformationWeek Healthcare: why cloud startups favor Direct Protocol as a simpler alternative to centralized HIEs. (Free registration required.)