Google reduces per-gigabyte pricing for Cloud Datastore by 25%, reflecting falling cloud storage prices; read operations are cheaper, too.
Google has reduced the price of Google Cloud Datastore, introduced a week ago at its Google I/O conference in San Francisco, from 24 cents per gigabyte per month to 18 cents per gigabyte per month, a 25% reduction.
Google Cloud Datastore is unstructured data storage similar to Amazon Web Services' Simple Storage Service (S3). By quickly reducing its price, Google is encouraging greater use of Google Cloud Datastore for object storage purposes. Customers may use it as a standalone service or in connection with their Google App Engine and Google Compute Engine operations. Google manages the data storage system; it's available only as a cloud service.
Google announced the price reduction in a post to its Cloud Platform blog Thursday. Google Cloud Datastore is an offshoot of the Google App Engine's High Replication Datastore system, established in 2011, which now processes 4.5 trillion transactions per month, engineering director Peter Magnusson wrote in the post. It is a highly scalable, reliable system of the type that has made Google cloud architecture famous. It has a 99.95% uptime record, Magnusson said.
Although similar to Amazon Web Services S3, Google Cloud Datastore is something less than a direct competitor, even with the price reduction. The reduction probably has more to do with keeping existing Google App Engine and Compute Engine users attached to Google services than with competing directly in the larger field. By comparison, Amazon's S3 standard storage costs 9.5 cents per GB per month for the first terabyte of object storage, and S3 also comes in Reduced Redundancy (7.6 cents per GB) and Glacier (1 cent per GB) tiers.
Like the High Replication Datastore system from which it sprang, Google Cloud Datastore is based on coordinated copies of replicated data, ensuring that a copy with data integrity survives any single hardware or system failure. High Replication Datastore uses the well-established Paxos algorithm to achieve this survivability. Stored objects may be of mixed data types, in contrast to more rigidly structured storage.
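The survivability idea behind Paxos-based replication can be illustrated with a minimal majority-quorum sketch. The `Replica` class and `quorum_write` helper below are hypothetical illustrations of the general technique, not Google's implementation:

```python
# A minimal sketch of majority-quorum replication, the core survivability
# idea behind Paxos-based systems. Illustrative only -- not Google's code.

class Replica:
    def __init__(self, up=True):
        self.up = up        # whether this node is reachable
        self.data = {}

    def store(self, key, value):
        if not self.up:     # a failed node cannot acknowledge the write
            return False
        self.data[key] = value
        return True

def quorum_write(replicas, key, value):
    # The write commits only if a majority of replicas acknowledge it,
    # so a later majority read is guaranteed to overlap at least one
    # replica that holds the committed value.
    acks = sum(1 for r in replicas if r.store(key, value))
    return acks > len(replicas) // 2

# Five replicas with one node down: the write still commits (4 of 5 acks).
nodes = [Replica(), Replica(up=False), Replica(), Replica(), Replica()]
print(quorum_write(nodes, "entity:42", {"kind": "Order"}))  # True
```

Because a majority must acknowledge every write, the loss of any minority of nodes never loses committed data.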
Under the new pricing structure, read operations dropped from 7 cents to 6 cents per 100,000 operations, and write operations from 10 cents to 9 cents per 100,000.
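The percentage cuts implied by the old and new prices quoted in the article can be checked directly:

```python
# Working out the reductions from the old and new prices quoted above
# (all figures in US cents, taken from the article).

def pct_cut(old, new):
    """Percentage reduction from old price to new price, rounded."""
    return round((old - new) / old * 100)

print(pct_cut(24, 18))  # storage, per GB-month -> 25
print(pct_cut(7, 6))    # reads, per 100,000 ops -> 14
print(pct_cut(10, 9))   # writes, per 100,000 ops -> 10
```

So the operation prices fell by a smaller margin than the headline 25% storage cut.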