Amazon S3 Slowed By Software 'Misconfiguration'

A mistake in the configuration of an S3 traffic-management system produced elevated latencies and error rates for AWS S3 users early Monday morning.

Charles Babcock, Editor at Large, Cloud

August 11, 2015


Amazon Web Services reported a rare slowdown of its S3 storage service in the early morning hours of Monday, Aug. 10. After initially applying the wrong fix, AWS backtracked and corrected the problem, returning S3 to normal operation before 7 a.m. Eastern time Monday.

Amazon's Service Health Dashboard reported at 3:36 a.m. Eastern time that it was investigating elevated error rates on requests to S3, and 24 minutes later it reported it was trying to determine the root cause. The errors occurred in what Amazon terms the US Standard region, which it didn't define but which includes its most heavily trafficked site, US East in Northern Virginia.
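
For those who prefer to watch such events programmatically, the dashboard's status messages are also published as per-service RSS feeds. The snippet below is a minimal sketch in Python; the feed URL pattern is an assumption based on the dashboard's published feeds and should be confirmed against status.aws.amazon.com.

# A minimal sketch: poll the Service Health Dashboard RSS feed for
# S3 status updates. The feed path below is assumed, not confirmed.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://status.aws.amazon.com/rss/s3-us-standard.rss"

with urllib.request.urlopen(FEED_URL) as resp:
    tree = ET.parse(resp)

# Each RSS item mirrors a message posted to the dashboard.
for item in tree.iterfind(".//item"):
    print(item.findtext("pubDate"), "-", item.findtext("title"))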

At 4:52 a.m. Eastern the Amazon dashboard reported it was "actively working on the recovery process, focusing on multiple steps in parallel." Customers could expect continued elevated error rates and wait times as they attempted to use the service, Amazon reported.
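
For customers caught in an event like this, the standard client-side mitigation is to retry failed requests with backoff rather than surface the first error. The sketch below uses boto3, the AWS SDK for Python; the bucket and key names are hypothetical, and the retry settings are an illustration rather than anything AWS prescribed for this incident.

# A minimal sketch: tune the SDK's built-in retries so transient S3
# errors are retried with backoff instead of failing immediately.
import boto3
from botocore.config import Config

retry_config = Config(
    retries={
        "max_attempts": 10,  # total attempts, including the first
        "mode": "adaptive",  # client-side rate limiting plus backoff
    }
)

s3 = boto3.client("s3", config=retry_config)

# During elevated error rates, individual requests may still fail;
# the SDK retries transparently before raising an exception.
response = s3.get_object(Bucket="example-bucket", Key="example-key")
data = response["Body"].read()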

[Want to learn more about what was behind another recent outage? See AWS Outage Traced To Route Leak.]

At some point, AWS realized it had identified the wrong root cause and backtracked to reassess and follow a different recovery route. The S3 slowdown was "due to a configuration error in one of the systems that Amazon S3 uses to manage request traffic," it reported.

After identifying the correct cause, AWS began to report recovery from the elevated error rates and latencies at 6:36 a.m. Eastern. By 6:46 a.m. it reported that the system was operating normally.

"We pursued the wrong root cause initially, which prompted us to try restorative actions that didn't solve the issue. Once we understood the real root cause, we resolved the issue relatively quickly and restored normal operations," the Service Health Dashboard reported.

The slowdown affected other services that depend on S3, including Elastic MapReduce and the retrieval of customers' Amazon Machine Images from storage during the early morning hours, the Service Health Dashboard said.

About the Author

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive Week. He is a graduate of Syracuse University, where he obtained a bachelor's degree in journalism. He joined InformationWeek in 2003.
