The Obamacare website launch was an embarrassment. Whether it's truly "fixed" is questionable. What actions would you take, IT pros, and what lessons have you learned?

David F Carr, Editor, InformationWeek Government/Healthcare

December 4, 2013

The HealthCare.gov website, the IT centerpiece for implementing Obamacare, is said to be "fixed" now -- although there are still plenty of reports of site access problems and deeper glitches in the software.

Ever since the glitch-ridden launch of the site on October 1, IT experts of all stripes have been weighing in with their thoughts on what went wrong and why. InformationWeek last week conducted a series of interviews with experts in which we asked for a more prescriptive analysis: How do you recover from an IT disaster on the scale of HealthCare.gov?

I recap some of the common themes (and differences of opinion) below, but we'd also like your point of view. Have you ever been involved in a technology turnaround on a big, important project that failed at launch? Did you see that initial failure coming? What are the most successful strategies you've seen for reorganizing for success?

It's hard not to revert to the "blame game" -- pointing out what should have been done in the first place but wasn't -- and there's plenty more to say about that. But put yourself in the shoes of Jeffrey Zients, the man drafted to get the HealthCare.gov project back on track. Would you have deployed additional manpower in a "tech surge"? Would you have shut the site down while making repairs? Would you have outsourced more (or fewer) functions? And when you report back to President Obama, which specific recommendations would you make about federal government procurement reform or IT project management strategies? How would you make those changes stick, so that the next president, Republican or Democrat, will be able to make big e-government promises and keep them?

[Read our complete coverage of the HealthCare.gov launch here.]

This exercise shouldn't be about what you think of the Affordable Care Act, a.k.a. Obamacare. Although the two issues have become intertwined, the performance of the website and the wisdom of the law are two different things.

Whether you're cheering for the success or demise of Obamacare, the facts are the facts, and the fact is this technology implementation undercut the policy implementation. I think the president and his top advisers made far too many assumptions about how easy it would be to launch a consumer-friendly insurance shopping website. They failed to comprehend how much bigger a job it is than fielding, say, a campaign website.

Martin Abbott and Michael Fisher, a couple of scalability consultants and authors we interviewed, said the government's mistakes weren't so different from the ones they've seen many private companies make when they misjudge the needed scalability and capacity of their Web systems. The one big difference they see is the partisan environment surrounding the project, where those trying to salvage HealthCare.gov and the programs it represents are competing for attention with those who want them to fail.

Here are a few key questions surrounding the HealthCare.gov failure:

Should it have been shut down?
One thing that should have happened but didn't, according to one camp: shutting down the website while it was being fixed, on the premise that it launched prematurely and clearly wasn't ready. We've heard this line of thinking from computer security experts, both in interviews and in Congressional testimony. The overall performance of the website doesn't inspire confidence in its security, and there's also evidence that warnings of security flaws were ignored during the site's construction.

The implication is that the Obama administration failed to shut down the site for "political reasons," because it would have meant admitting defeat and delaying implementation of the law.

Richard Spires, a former Department of Homeland Security CIO, said he would have recommended shutting down the site for repairs -- politics aside. "It takes a lot of energy to run the site," Spires said, and that energy needed to be diverted into fixing functionality and testing performance.

Abbott and Fisher disagree, saying the best way to fix an underperforming website is to make incremental changes and test them against real-world traffic rather than simulated loads. Even security problems, they said, are better solved with a methodical approach than with a panicked shutdown.

Also, one of the original sins of the HealthCare.gov implementers was that they sent it live in a big bang, trying to bring the site up to full capacity in one day rather than having some sort of beta period or soft launch to gradually build capacity and test functionality. If the site were taken offline and relaunched, that approach would amount to a second big bang -- and, likely, big problems, Abbott and Fisher said.
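The incremental approach Abbott and Fisher describe is easy to picture in code. Here's a minimal sketch -- not drawn from HealthCare.gov's actual codebase, with names and thresholds invented for illustration -- of the deterministic hash-bucket routing teams commonly use to expose each change to a small, stable slice of live traffic and widen it only as the change proves out:

```python
import hashlib

# Percentage of live traffic routed to the updated code path.
# Raised in stages -- e.g., 1 -> 5 -> 25 -> 100 -- only after the
# previous stage holds up under real-world traffic.
ROLLOUT_PERCENT = 5

def in_rollout(user_id: str, percent: int = ROLLOUT_PERCENT) -> bool:
    """Deterministically assign a visitor to the new code path.

    Hashing the user ID gives each visitor a stable bucket from 0-99,
    so the same person sees the same experience on every request.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 100 < percent

def handle_enrollment(user_id: str) -> str:
    # Hypothetical entry point; function names are illustrative only.
    if in_rollout(user_id):
        return "updated enrollment flow"
    return "current enrollment flow"
```

A soft launch is the same idea applied to the whole site: start with a fraction of visitors on day one and grow capacity against observed demand, rather than betting everything on a single cutover.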

Bryce Williams, the founder of Extend Health and now managing director of exchanges for Towers Watson, agreed that no going concern -- inside government or outside -- would willingly shut down its primary website. If anything, a business would be even more likely to keep an underperforming site operating while working to improve it.

What would you do? Shut it down, or not? Why?

Is this a job for government IT? 
Former Air Force Col. Mark Douglas, who as a military IT leader oversaw some technology turnarounds, acknowledged the limits of the government's capability to build complex systems on its own. "The US government is not the leader in commercial IT, nor should it be," he said. Government employees must focus on the inherently governmental aspects of a program, creating an efficient division of labor with private industry partners.

On the other hand, one of the villains in this story is the government's process for procuring IT services. David Blumenthal, a former director of the Office of the National Coordinator for Health IT, argues that managers most involved in an IT project's requirements often are kept at arm's length from the people who select contractors to deliver on those requirements, making for unnecessary dysfunction.

Extend Health's Williams suggested another way the government could partner with private industry: letting private insurance brokers (such as his) handle more of the consumer enrollments. Under that scenario, the government would focus on services only it can provide, such as an online determination of eligibility for federal subsidies, while private firms would be responsible for delivering "a spectacular user experience," something he doesn't see as a government core competency.
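Williams's division of labor implies a simple integration pattern. The sketch below is entirely hypothetical -- no such public API existed at the time, and the endpoint and field names are invented -- but it shows the shape of the idea: the broker owns the shopping experience and calls a government service only for the determination that only government can make.

```python
import requests  # a broker's hypothetical client code

# Invented endpoint for illustration; not a real federal service.
ELIGIBILITY_API = "https://eligibility.example.gov/v1/determinations"

def check_subsidy_eligibility(household: dict) -> dict:
    """Ask the (hypothetical) federal service whether a household
    qualifies for a subsidy, and for how much."""
    resp = requests.post(ELIGIBILITY_API, json=household, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g., {"eligible": True, "monthly_subsidy": 212.50}

# Everything else -- plan comparison, user experience, enrollment
# paperwork with the chosen insurer -- stays on the broker's side.
determination = check_subsidy_eligibility(
    {"household_size": 3, "annual_income": 42000, "state": "FL"}
)
```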

What parts would you outsource? What would you keep in government hands? And if services are to be outsourced, how must the contracting process change?

Does a "tech surge" make sense?
Drawing an analogy from military operations in Iraq, the Obama administration decided to address HealthCare.gov’s problems with a "tech surge" of additional manpower, effort, and oversight. The experts we interviewed were ambivalent about the value of such a surge, saying that it's difficult for new people, no matter how talented, to come up to speed on a mature IT project and figure out what needs to be done to fix it. On the other hand, the existing team members may know what needs to be done but aren't empowered to do it.

"On mega, custom-coded IT implementations that trip before the finish line, many organizations want to throw money at the problem in the form of outsider IT SWAT teams. Keep your money," Col. Douglas advised, "unless your program team is literally incapable of fixing the problems." He added that it's "extremely difficult, expensive, and time-consuming to bring in new people to work on custom code."

Abbott and Fisher said project leaders could put additional manpower to good use as long as they focus it on identifying and correcting problems -- the hundreds of bugs the developers have been fixing over the past couple of months.

What do you say? Surge or no surge? Keep the project team or throw the bums out?

Those are some of our key questions. We'd like your answers. Also, what are the questions we forgot to ask?

Follow David F. Carr on Twitter @davidfcarr or Google+. He is the author of Social Collaboration For Dummies (October 2013).

About the Author(s)

David F Carr

Editor, InformationWeek Government/Healthcare

David F. Carr oversees InformationWeek's coverage of government and healthcare IT. He previously led coverage of social business and education technologies and continues to contribute in those areas. He is the editor of Social Collaboration for Dummies (Wiley, Oct. 2013) and was the social business track chair for UBM's E2 conference in 2012 and 2013. He is a frequent speaker and panel moderator at industry events. David is a former Technology Editor of Baseline Magazine and Internet World magazine and has freelanced for publications including CIO Magazine, CIO Insight, and Defense Systems. He has also worked as a web consultant and is the author of several WordPress plugins, including Facebook Tab Manager and RSVPMaker. David works from a home office in Coral Springs, Florida. Contact him at [email protected] and follow him at @davidfcarr.
