Why Did Amazon Web Services Shut Down WikiLeaks?

If the real reason is the impact denial of service attacks had on other customers' EC2 service, what does that mean for less controversial cloud users hit by hackers?

Charles Babcock, Editor at Large, Cloud

December 2, 2010

3 Min Read

What prompted Amazon Web Services to give WikiLeaks the boot off its EC2 servers?

Surely the theft and airing of thousands of secret diplomatic cables prompted someone in the U.S. State Department or Department of Homeland Security to call AWS and advise a shutdown. I wouldn't be surprised to learn that such a call happened.

But there was another reason for doing so, people responsible for the CloudSleuth monitoring system tell me. WikiLeaks was running on servers in Amazon's data center in Dublin, Ireland. CloudSleuth maintains monitoring stations around the world, constantly checking the response times of web services running in 30 different cloud centers, including EC2’s Dublin data center.

As the revelations became more damaging, someone attempted to slow or halt the outflow of WikiLeaks information through denial of service (DoS) attacks, in which the host servers are subjected to so many automated requests that they can't handle legitimate traffic. Cloud performance can vary with network vagaries around the world, but CloudSleuth's operators figured they could detect the attacks' impact from their London station, which was close enough to Dublin for network latency to be negligible. So were the attacks affecting EC2's performance out of Dublin?
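CloudSleuth's actual probe isn't public, but the principle is simple: a monitoring station times each request from a fixed vantage point and watches for the baseline to shift. A minimal sketch, with a simulated fetch standing in for a real HTTP request:

```python
import time

def measure_response_time(fetch):
    """Time one request the way a monitoring probe might.

    fetch: a zero-argument callable that performs the request.
    Returns elapsed wall-clock seconds.
    """
    start = time.monotonic()
    fetch()
    return time.monotonic() - start

# Simulated check: a 50 ms delay stands in for a real HTTP fetch.
elapsed = measure_response_time(lambda: time.sleep(0.05))
```

In practice the callable would issue a real HTTP request against the monitored application, and the station would log each sample so that a sustained spike above the baseline stands out.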

Amazon Web Services spokesmen have not responded to requests for comment on the situation. But CloudSleuth practitioners were less reticent. "To quote the star of Sarah Palin's Alaska, 'You betcha!'" John J. Krcmarik, a member of the CloudSleuth development team at Compuware, wrote in a Dec. 2 blog post.

Before the denial of service attacks began, CloudSleuth found its own application in Dublin posting a consistent response time of 1.3 to 1.4 seconds through November. On Nov. 29, it spiked to 2.4 seconds, an increase of more than 70% over the baseline. There was a corresponding performance spike at the 29 other monitoring stations around the world, which appeared to bump up response times by 50%, Krcmarik wrote in his blog. That means the denial of service attacks directed at one customer were absorbing EC2's resources at the expense of other customers and hurting their application performance.
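As a sanity check on those figures, the jump from the steady 1.3-to-1.4-second baseline to 2.4 seconds works out to an increase of roughly 71% to 85%, depending on which end of the baseline you measure from:

```python
# Baseline response time (seconds) through November, per CloudSleuth
baseline = 1.4
# Response time during the Nov. 29 spike
spiked = 2.4

# Percentage increase over the baseline: (2.4 - 1.4) / 1.4 * 100
increase_pct = (spiked - baseline) / baseline * 100  # about 71%
```

Measured against the 1.3-second end of the range instead, the same spike is an increase of about 85%.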

On a business basis, Amazon Web Services was justified in making an executive decision and telling its WikiLeaks customer to go elsewhere, or simply cutting off its operation unilaterally, which WikiLeaks' posts indicate is what happened.

But this still raises awkward questions for advocates of cloud computing and AWS itself. Under what circumstances does AWS feel justified in shutting down a customer? Did it take a call from the State Department, and if so, what rank of public official gets to make that call? Surely not the administrative assistant to the under secretary.

On the other hand, it may have decided on its own that enough was enough and it wasn't in business to satisfy the demands of denial of service attacks. If so, what happens if someone mounts such an attack against your public-facing application in the cloud? AWS would likely treat a valued customer differently from how it treated WikiLeaks, and an application's owner and the cloud service provider would together find a way to shut out the attacks or shift operations to a dedicated and protected server.

It may be that the WikiLeaks incident illustrates something I thought was highly unlikely in the cloud: the service provider finds the traffic too debilitating to allow the customer to continue operations. Amazon Web Services can clarify what decisions it made -- and what decisions were made for it -- in this case, but right now, it's working on how to handle the fallout from this brouhaha and put an acceptable public face on it.

About the Author

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive Week. He is a graduate of Syracuse University where he obtained a bachelor's degree in journalism. He joined the publication in 2003.
