Amazon Web Services Apologizes, Explains Outage
Because of the multi-day outage, the company said it will provide a 10-day service credit to customers using AWS resources in the affected region.
Eight days after Amazon Web Services (AWS) experienced a major multi-day service outage in its East Coast region, the on-demand computing infrastructure company has published a detailed post-mortem and apologized.
The outage, AWS said, was triggered by "a network configuration change," which presumably means a manual mistake made during a network adjustment. Human error, in other words.
The configuration change was an effort to upgrade network capacity and involved shifting network traffic off one of the redundant routers in the primary Elastic Block Store (EBS) network. That shift, AWS explained, "was executed incorrectly." It left EBS volumes unable to find places to replicate their data and set off a kind of endless loop, a "re-mirroring storm," as AWS put it.
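To make the failure mode a little more concrete, the toy simulation below sketches how such a storm can feed on itself: when many volumes lose sight of their mirrors at once, each one hunts for spare replica space, free capacity runs out, and the remaining volumes keep retrying indefinitely. This is an illustration only, not AWS's implementation; the node counts, slot counts, and volume counts are invented for the example.

```python
# Illustrative sketch only: a toy model of a "re-mirroring storm."
# Numbers below are hypothetical, not drawn from AWS's post-mortem.
import random

NODES = 100               # hypothetical storage nodes in the cluster
FREE_SLOTS_PER_NODE = 2   # spare replica slots per node
VOLUMES_CUT_OFF = 500     # volumes that lost contact with their mirrors

free_slots = {n: FREE_SLOTS_PER_NODE for n in range(NODES)}
stuck_volumes = list(range(VOLUMES_CUT_OFF))

rounds = 0
while stuck_volumes and rounds < 10:
    rounds += 1
    still_stuck = []
    for vol in stuck_volumes:
        # Each volume that cannot see its mirror searches for a node
        # with a free replica slot so it can re-mirror its data.
        candidates = [n for n, slots in free_slots.items() if slots > 0]
        if candidates:
            node = random.choice(candidates)
            free_slots[node] -= 1      # slot consumed by the new replica
        else:
            still_stuck.append(vol)    # no space left: the volume stays
                                       # stuck and keeps retrying
    stuck_volumes = still_stuck
    print(f"round {rounds}: {len(stuck_volumes)} volumes still searching")

# Once free capacity is exhausted, the remaining volumes loop forever,
# and their constant retries keep the cluster saturated.
```

Running the sketch shows the cluster's spare capacity draining in the first pass, after which the leftover volumes never find a home, which is roughly the "stuck" state AWS described for affected EBS volumes.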
For its failure, AWS apologized. "We know how critical our services are to our customers' businesses and we will do everything we can to learn from this event and use it to drive improvement across our services," the company said.
It also acknowledged that communication about such incidents can be improved, and promised to address communication gaps through increased staffing and better tools.
AWS has decided to award a 10-day service credit to all customers using EBS or the Amazon Relational Database Service (RDS) in the affected region, whether their operations were interrupted or not.
The outage affected a subset of customers using Amazon Elastic Compute Cloud (EC2), the company's on-demand computing service, specifically those using certain Amazon EBS volumes in the group of data centers referred to as US East. Some customers using RDS were also affected because RDS relies on EBS to store log files and database files.
Service problems began early Thursday, April 21, and were resolved by Monday, April 25, except for the data loss: On Monday, AWS acknowledged that 0.07% of the EBS volumes in the US East region were unrecoverable.
The outage slowed or shut down a significant number of prominent Internet businesses, including Engine Yard, Foursquare, Hootsuite, Heroku, Quora, and Reddit. Beyond that, it renewed skeptics' doubts about the viability of cloud computing.
Among those already sold on the cloud, the incident at least forced a re-evaluation of the risks of outsourced infrastructure and prompted further thought about disaster recovery planning.
In a blog post on Friday, Gartner Research VP Andrea Di Maio suggested that the outage makes clear that customers have to plan for imperfection. "While it is important to maintain pressure on service providers to improve their reliability footprint, the onus of developing or contracting reliable system stays with their clients, and there won't be any miraculous cloud that provides 100% uptime or that does not risk to fail meeting its own SLAs," he wrote.
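Di Maio's point about uptime promises is easy to quantify: even a strong SLA leaves room for meaningful downtime over a year. The percentages below are generic examples, not the terms of any particular AWS contract.

```python
# Illustrative arithmetic only: downtime per year allowed by a given
# uptime percentage. Figures are examples, not any provider's SLA terms.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

for uptime in (99.0, 99.9, 99.95, 99.99):
    allowed_downtime_hours = HOURS_PER_YEAR * (1 - uptime / 100)
    print(f"{uptime}% uptime allows about "
          f"{allowed_downtime_hours:.1f} hours of downtime per year")
```

Even at 99.95% uptime, roughly four and a half hours of downtime a year fall within the promise, which is why Di Maio argues that reliability planning cannot simply be delegated to the provider.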