Amazon S3 Slowed By Software 'Misconfiguration'
A mistake in the configuration of an S3 traffic-management system produced elevated latencies and error rates for AWS S3 users early Monday morning.
Amazon Web Services reported a rare slowdown and degraded performance of its S3 storage service in the early morning hours of Monday, Aug. 10. After initially applying the wrong fix, AWS backtracked and corrected the problem, restoring S3 to normal operation before 7 a.m. Eastern time Monday.
Amazon's Service Health Dashboard reported at 3:36 a.m. Eastern time that it was investigating elevated error rates on requests to S3, and 24 minutes later it reported that it was trying to determine the root cause. The errors were occurring in what it termed the US Standard region, which it didn't define but which includes its most heavily trafficked site, US East in Northern Virginia.
At 4:52 a.m. Eastern, the Amazon dashboard reported it was "actively working on the recovery process, focusing on multiple steps in parallel." Customers could expect to continue to see elevated error rates and wait times as they attempted to use the service, Amazon reported.
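During episodes of elevated error rates like this one, the usual client-side defense is to retry failed requests with backoff rather than fail on the first error. As a minimal, hypothetical sketch (not part of Amazon's remediation), here is how an S3 client using the AWS SDK for Python (boto3) might be configured to ride out transient errors; the bucket and key names are placeholders:

import boto3
from botocore.config import Config
from botocore.exceptions import ClientError

# Hypothetical client setup: retry transient S3 errors with backoff
# instead of surfacing the first elevated-error response.
retry_config = Config(retries={"max_attempts": 10, "mode": "standard"})
s3 = boto3.client("s3", config=retry_config)

try:
    # Placeholder bucket/key; under the config above, the SDK
    # automatically retries throttling and 5xx responses.
    response = s3.get_object(Bucket="example-bucket", Key="example-key")
    data = response["Body"].read()
except ClientError as err:
    # Raised only after the configured retries are exhausted.
    print(f"S3 request failed after retries: {err}")

A setup along these lines wouldn't have eliminated Monday's slowdown, but it can turn intermittent errors into added latency instead of hard failures.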
[Want to learn more about what was behind another recent outage? See AWS Outage Traced To Route Leak.]
At some point, AWS realized it had identified the wrong root cause and backtracked to reassess and follow a different recovery route. The S3 slowdown was "due to a configuration error in one of the systems that Amazon S3 uses to manage request traffic," it reported.
After identifying the correct cause, AWS began to report recovery from error rates and latencies at 6:36 a.m. Eastern. By 6:46 a.m. it was able to report that the system was operating normally.
"We pursued the wrong root cause initially, which prompted us to try restorative actions that didn't solve the issue. Once we understood the real root cause, we resolved the issue relatively quickly and restored normal operations," the Service Health Dashboard reported.
The slowdown affected other services that depend on S3 during the early morning hours, including Elastic MapReduce and the retrieval of customers' Amazon Machine Images from storage, the Service Health Dashboard said.