"You can dial up database capacity without thinking about it," said Werner Vogels, CTO of AWS, in a 9 a.m. webcast today. "It's not database software. It's a database service," one that can perform "consistently fast," he said.
Anyone creating an account for the service can establish a database table with the data stored on solid-state disks (SSDs). That gives the service a high degree of performance predictability by eliminating calls to rotating disks. "Customers can typically achieve average service-side latencies in the single-digit milliseconds," Vogels wrote in an early morning post to his All Things Distributed blog.
The data is spread across multiple availability zones, or data center units with independent networking and power supplies.
The customer tells DynamoDB through the AWS management console how many requests per second it expects to see, then AWS spreads the database table across enough servers to provide that capacity, Vogels said during the webcast. If unexpected traffic appears, the customer can dial up more capacity at the console.
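In today's AWS SDK for Python (boto3), that provisioning model surfaces as explicit read and write capacity units on `create_table` and `update_table`. The sketch below only builds the request parameters those calls take, so it runs without an AWS account; the table name, key schema, and capacity figures are illustrative assumptions, not values from the article.

```python
# Sketch of DynamoDB provisioned-throughput requests in the shape
# boto3's create_table/update_table calls expect. All names and
# numbers are illustrative assumptions.

def create_table_request(table_name, read_capacity, write_capacity):
    """Build parameters for dynamodb.create_table with provisioned throughput."""
    return {
        "TableName": table_name,
        "KeySchema": [{"AttributeName": "id", "KeyType": "HASH"}],
        "AttributeDefinitions": [{"AttributeName": "id", "AttributeType": "S"}],
        "ProvisionedThroughput": {
            "ReadCapacityUnits": read_capacity,
            "WriteCapacityUnits": write_capacity,
        },
    }

def dial_up_request(table_name, read_capacity, write_capacity):
    """Build parameters for dynamodb.update_table when traffic spikes."""
    return {
        "TableName": table_name,
        "ProvisionedThroughput": {
            "ReadCapacityUnits": read_capacity,
            "WriteCapacityUnits": write_capacity,
        },
    }

# With credentials configured, these dicts would be passed to a client:
#   import boto3
#   dynamodb = boto3.client("dynamodb")
#   dynamodb.create_table(**create_table_request("game_scores", 100, 250))
#   dynamodb.update_table(**dial_up_request("game_scores", 400, 1000))

req = create_table_request("game_scores", 100, 250)
print(req["ProvisionedThroughput"]["WriteCapacityUnits"])  # 250
```

The point of the model is that capacity is a number the customer declares, not a cluster the customer sizes; "dialing up" is just a second API call with larger figures.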
One beta customer requested 250,000 writes to the system per second, and used it over a three-day period. "It was great testimony to our scalability and throughput," Vogels said.
DynamoDB will give customers the ability to make a performance-vs.-consistency tradeoff. By default, reads are "eventually" consistent, which favors speed; customers who need the relational-database-style assurance of getting the same answer to the same question every time can request strongly consistent reads instead. Eventual consistency is often used in situations in which the precision of the answer is less important than the speed of the response. For instance, a game player asking how many fellow players are available might be satisfied with an answer that is off by a few players, rather than a precise answer several seconds later.
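In the current API that tradeoff is a per-read flag: boto3's `get_item` takes a `ConsistentRead` parameter that defaults to eventually consistent reads. A minimal sketch, again only building the request parameters so it runs offline; the table and key names are hypothetical.

```python
# Sketch of DynamoDB's per-read consistency toggle, in the shape of
# boto3's get_item parameters. ConsistentRead=True requests a strongly
# consistent read; omitted or False, the read is eventually consistent.
# Table and key names here are illustrative assumptions.

def get_item_request(table_name, key_value, strongly_consistent=False):
    """Build parameters for dynamodb.get_item; default is eventual consistency."""
    return {
        "TableName": table_name,
        "Key": {"player_id": {"S": key_value}},
        "ConsistentRead": strongly_consistent,
    }

# Fast, possibly slightly stale answer -- fine for a lobby head count:
fast = get_item_request("players_online", "lobby-42")
# Relational-style guarantee, for when the exact answer matters:
exact = get_item_request("players_online", "lobby-42", strongly_consistent=True)

print(fast["ConsistentRead"], exact["ConsistentRead"])  # False True
```

Making consistency a per-request choice, rather than a database-wide setting, is what lets one table serve both the casual head-count query and the query that must be exact.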
DynamoDB joins Amazon's SimpleDB database service and its Relational Database Service. It has been integrated with another existing service, Elastic MapReduce, AWS's implementation of Hadoop, the big data distribution and sorting system.
Dynamo was the name Amazon engineers gave to an early NoSQL system they built to cope with the fluctuation in holiday shopping traffic. The installed commercial relational database system was proving inadequate to the scale of the task.
Dynamo, along with Google's BigTable, became the prototype for a number of follow-up NoSQL systems, including Cassandra and Riak. "A number of outages at the height of the 2004 holiday shopping season can be traced back to scaling commercial technologies (on Amazon's e-commerce systems) beyond their boundaries," wrote Vogels in his blog. Big data handling had been born, and now DynamoDB combines features of relational database and NoSQL technologies in a cloud service.