Google reduces per-gigabyte pricing for Cloud Datastore by 25%, reflecting falling cloud storage prices; read operations are cheaper, too.
Google has cut the price of Google Cloud Datastore, introduced a week ago at its Google I/O conference in San Francisco, from 24 cents per gigabyte per month to 18 cents per GB per month, a 25% reduction.
Google Cloud Datastore is an unstructured data storage service similar to Amazon Web Services' Simple Storage Service (S3). By quickly reducing its price, Google is encouraging greater use of Google Cloud Datastore for object storage. Customers may use it as a standalone service or in connection with their Google App Engine and Google Compute Engine operations. Google manages the data storage system; it's available only as a cloud service.
Google announced the price reduction in a post to its Cloud Platform blog Thursday. Google Cloud Datastore is an offshoot of the Google App Engine's High Replication Datastore system, established in 2011, which now processes 4.5 trillion transactions per month, engineering director Peter Magnusson wrote in the post. It is a highly scalable, reliable system of the type that has made Google cloud architecture famous. It has a 99.95% uptime record, Magnusson said.
Although similar to Amazon Web Services' S3, Google Cloud Datastore is something less than a direct competitor, even with the price reduction. The cut probably has more to do with keeping existing Google App Engine and Compute Engine users attached to Google services than with competing directly in the larger field. By comparison, Amazon's S3 standard storage costs 9.5 cents per GB per month for the first terabyte of object storage, and S3 also comes in Reduced Redundancy (7.6 cents per GB) and Glacier (1 cent per GB) tiers.
Like the High Replication Datastore system from which it sprang, Google Cloud Datastore is based on coordinated copies of replicated data, ensuring that a copy with data integrity survives any hardware or system failure. The High Replication Datastore uses the well-established Paxos consensus algorithm to achieve that survivability. The objects stored may be of differing data types, as opposed to more structured data.
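The survivability property described above rests on the majority-quorum idea that Paxos builds on: a write accepted by a majority of replicas overlaps any later surviving majority in at least one replica that holds the data. A minimal sketch of that arithmetic (a hypothetical illustration only; real Paxos also involves proposal numbering, leader election, and recovery, none of which is modeled here):

```python
def write_survives(total_replicas: int, failed_replicas: int) -> bool:
    """Return True if a majority-acknowledged write is still readable.

    A write acknowledged by a majority of replicas survives as long as a
    majority of replicas remains alive, because any two majorities of the
    same replica set must share at least one member.
    """
    majority = total_replicas // 2 + 1
    surviving = total_replicas - failed_replicas
    return surviving >= majority

# With 5 replicas, a quorum system tolerates up to 2 simultaneous failures.
print(write_survives(5, 2))  # True
print(write_survives(5, 3))  # False
```

This is why quorum-replicated systems are typically deployed with an odd number of replicas: five replicas tolerate two failures, while six still tolerate only two.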
Under the new pricing structure, read operations fell from 7 cents to 6 cents per 100,000 operations, and write operations from 10 cents to 9 cents per 100,000.
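The percentage reductions can be checked with a quick calculation (a sketch using the rates quoted in this article):

```python
def percent_reduction(old: float, new: float) -> float:
    """Percentage decrease from the old price to the new price."""
    return (old - new) / old * 100

storage = percent_reduction(24, 18)  # cents per GB per month
reads = percent_reduction(7, 6)      # cents per 100,000 read operations
writes = percent_reduction(10, 9)    # cents per 100,000 write operations

print(f"storage: {storage:.1f}%")  # 25.0%
print(f"reads:   {reads:.1f}%")    # 14.3%
print(f"writes:  {writes:.1f}%")   # 10.0%
```

The headline 25% figure applies only to storage; the per-operation cuts are smaller, at roughly 14% for reads and 10% for writes.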