6/7/2015
12:06 PM
Charles Babcock

7 Data Center Disasters You'll Never See Coming

These are the kinds of random events that keep data center operators up at night. Is your disaster recovery plan prepared to handle these freak accidents?

(Image: toxawww via iStockphoto)

Flood, fire, solar flare, and crash by four-wheel-drive motor vehicle: These are the potential disasters that strike fear in the imaginations of data center operators, as the following accounts will show.

Jonathan Bryce, now executive director of the OpenStack Foundation, was the twenty-something founder of the Mosso Cloud in Dallas-Fort Worth when he found himself on the receiving end of such an incident on Dec. 18, 2009.

A diabetic driver in the vicinity of the Rackspace data center where Mosso was hosted had passed out behind the wheel of his SUV, crashing into a building housing the data center's electricity transformer equipment. Mosso was still running after the crash, but that was the start of a sequence of unlikely events that led to the service's outage.

How do you prepare for such an event in your disaster plan? "It's just one of those things you have to cope with as best you can," said Bryce.

This was a sentiment echoed last year by Robert von Wolffradt, CIO for the State of Iowa, in a blog post after an unexpected fire in the state's primary data center. Survivors of the lower Manhattan office buildings and hospitals flooded by Hurricane Sandy in 2012 would agree.

Even if you think you're prepared for earthquake, flood, and fire, when did you last worry seriously about the danger of solar flares? A powerful solar eruption that could have disrupted electrical transmission systems missed Earth by a narrow margin in 2012. If the eruption had occurred only one week earlier, Earth would have been in the line of fire, Daniel Baker of the University of Colorado told NASA Science News in 2014. The eruption's effects would have struck Earth's atmosphere, inducing heavy and unexpected voltage surges in electrical lines.

You might consider such a hazard extremely remote, but in 1859 a solar storm known as the Carrington Event hit Earth and induced voltages so large that the wiring in telegraph offices sparked out of control, setting some offices on fire.

CIOs and data center managers who've been through a disaster say that the best you can do is prepare. "Test complete loss of systems at least once a year. No simulation; take them offline," advised Wolffradt in a blog post following the State of Iowa's crisis.

Check out our list of data center disasters -- from the scary to the outright outlandish -- and tell us about your own data center dramas in the comments section below.

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive ...

Comments
batye
User Rank: Ninja
7/2/2015 | 12:33:42 AM
Re: Check your generators!
@kbartle803 interesting to know... thanks for sharing... in my books you could never be prepared 100%... 
Charlie Babcock
User Rank: Author
6/11/2015 | 3:15:00 PM
A narrow margin separates "chilled" from "too hot"
In no. 6, Outage by SUV, a commenter on Lew Moorman's blog post noted that a data center has about five minutes between the loss of its chillers and the start of equipment overheating. Does anyone know: is the margin really that narrow? I understand that computer equipment can operate at up to 100 degrees without trouble, but beyond that overheating starts to get dicey.
Charlie Babcock
User Rank: Author
6/11/2015 | 3:02:16 PM
Diesel fuel stored at NYC data centers reduced by 9/11
KBartle, what about this? One of the unreported aspects of the Hurricane Sandy disaster, when New York and many places along the East Coast went dark, was that every data center in the city had a limited supply of diesel fuel on premises. That was due to new regulations, I believe from a former mayor's office after 9/11, requiring that the amount of flammable liquid stored inside an office building be reduced. In some cases, that made the investment in generators irrelevant. Public transit was down, city streets were clogged, and fuel delivery trucks had great difficulty getting through. There goes the disaster recovery plan.
kbartle803
User Rank: Apprentice
6/10/2015 | 3:07:13 PM
Check your generators!
I was working at a data center in California that had power feeds from three different utilities, redundant battery backup, and a generator. All three utilities went down when the same source all three were using failed. We went to battery backup until the generator took over; it ran for about an hour until it overheated because its cooling system was rusted and clogged. The utilities were still down, so we ran on batteries for another hour until we finally went dark.
Dave Kramer
User Rank: Apprentice
6/9/2015 | 10:21:00 AM
Re: Move the backup site further away!
If I recall, the new data centers were New York (HQ), Houston, and Seattle - but now, realizing how hurricanes could still wipe out New York or Houston, Seattle at least might be safe from hurricanes, though not earthquakes. Maybe somewhere central like Colorado or New Mexico, where environmental and natural disasters are less likely, would be a safer bet! I'm located in the midwest - Saskatchewan, Canada - and we've been hit with flooding in the last few years, but in the lower-lying parts of the province.
Charlie Babcock
User Rank: Author
6/8/2015 | 9:07:39 PM
Move the backup site further away!
Dave Kramer, yes, it's a good idea to move the backup data center to a different site. But Hurricane Sandy told us just how far away that second site might have to be. Moving it across town or across the state might not have been enough in that case. With Sandy, disaster recovery specialist Sungard had flood waters lapping at the edges of its parking lots on the high ground in N.J. The advent of disaster recovery based on virtual machines makes it more feasible to move recovery to a distant site (but still doesn't solve all problems).
Dave Kramer
User Rank: Apprentice
6/8/2015 | 4:58:07 PM
Re: Data Center Disasters
We were dealing with a large corporation that had its own data center backup in the second World Trade Center tower in New York. So when the 9/11 disaster struck, it wiped out both data centers.

Their new data centers six months later had their second and third backups in various other cities spread among far-flung states. Unfortunately, it took such a drastic tragedy to produce a new policy of not allowing a backup data center to even be within the same state - which is probably a wise move overall.
Charlie Babcock
User Rank: Author
6/8/2015 | 12:48:52 PM
When the fire fighting system gets triggered by accident...
DanaRothrock, yes, part of the problem of disaster preparedness is preventing the fire fighting system, especially when it's triggered by accident, from destroying what it's supposed to save. There's been no easy answer for years. Halon was meant to prevent water damage to the equipment. Sprinklers, on the other hand, prevent Halon damage. It's a fool's bargain with fate.
Li Tan
User Rank: Ninja
6/8/2015 | 10:56:52 AM
Re: Data Center Disasters
This kind of accident is rare, but we need to be prepared for it. At a minimum, the hosts in one cluster should not all be located in the same rack, and ideally not even in the same building.
DanaRothrock
User Rank: Apprentice
6/8/2015 | 4:00:20 AM
Data Center Disasters
I know of a couple of data center meltdowns.

One was a lightning bolt that burned a one-inch hole in the side of the mainframe.

Another was a Halon discharge in the computer room due to a cigarette fire in a trash can. The Halon destroyed all the disk drives for the mainframe systems. Halon was then replaced by water sprinklers for big savings.