Google Adds Cloud Infrastructure Muscle Vs. Amazon
Google Compute Engine's first major upgrade adds 36 server instances to its cloud catalog, cuts prices to become more competitive.
In a bid to capture more cloud customers, Google has reduced prices on its existing virtual servers by 5% and added 36 selections to the four previously available in its Compute Engine server catalog.
Google has sought to put more muscle behind its infrastructure-as-a-service offering since Compute Engine was first announced as "a limited preview offering" at the Google I/O 2012 show in June. It's still in limited preview -- a beta test with customers -- with no date in sight for when it will become a generally available product, said Shailesh Rao, director of new products and solutions in the Google Enterprise unit.
But Rao added that Google is seeing heavy signup by Silicon Valley startups, which are often next door to the Mountain View, Calif., Googleplex. Compute Engine has gotten good reviews for the ease and speed with which it launches a virtual server.
"We have a long ways to go, but we're excited about what we've been able to do so far," said Rao in an interview. Another source of customers is existing companies looking to build a new application and run it in a cloud data center, he added.
Rao said Google seldom has to sell customers on the reliability of its infrastructure, since it runs on the same data center servers that power Google search. "We've built a beautiful house over the last 14 years. Now we want people to come and stay as long as they want," he said.
In the past, customers seeking to make application services available to people in Europe had to be Google premier support customers. Now standard Compute Engine users may deploy workloads to either North America or Europe.
The new server configurations come much closer to matching the wide variety of options found on Amazon Web Services, adding virtual machines with more CPU power and larger amounts of random access memory. Google's previous entry level -- a "standard" virtual server with one "core" (equal to half a 2011 Intel Sandy Bridge CPU core), plus 3.75 GB of RAM and 420 GB of disk space -- was priced at $0.145 an hour. With the price reduction, it's now $0.138 an hour.
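The arithmetic behind the cut can be checked against the article's figures. A minimal sketch, using the per-hour prices cited above; the 730-hour month is an assumed average, not a Google billing term:

```python
# Verify the 5% Compute Engine price cut on the standard 1-core instance.
old_hourly = 0.145                        # pre-cut price cited in the article
new_hourly = round(old_hourly * 0.95, 3)  # 5% reduction -> 0.138
print(new_hourly)

hours_per_month = 730                     # assumption: average hours per month
print(round(new_hourly * hours_per_month, 2))  # rough monthly cost in dollars
```

This matches the new $0.138-an-hour list price, and works out to roughly $100 a month for a continuously running standard instance.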
Rao said Google is trying to be competitive in its pricing, which appears to position a slightly heftier virtual server next to a similar Amazon offering at a slightly lower price.
Added to the standard offering, however, are some wide-ranging variations. For example, Google lists its largest HighCPU server with disk as an eight-core machine with 7.2 GB of RAM and 3,540 GB of disk space at $0.68 per hour. It also offers a maximum HighMem server with the same dimensions, except with 52 GB of RAM, priced at $1.272 per hour. Prices are slightly higher for European deployments.
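A rough per-core-hour comparison of the instance types cited above illustrates how the new catalog is priced; the specs and U.S. list prices come from the article, and "core" follows Google's definition (half a 2011 Sandy Bridge core):

```python
# Compare effective price per core-hour across the cited instance types.
instances = {
    "standard, 1-core": {"cores": 1, "ram_gb": 3.75, "hourly": 0.138},
    "HighCPU, 8-core":  {"cores": 8, "ram_gb": 7.2,  "hourly": 0.68},
    "HighMem, 8-core":  {"cores": 8, "ram_gb": 52,   "hourly": 1.272},
}
for name, spec in instances.items():
    per_core = spec["hourly"] / spec["cores"]
    print(f"{name}: ${per_core:.3f} per core-hour")
```

By this measure the HighCPU configuration is the cheapest compute, at about $0.085 per core-hour, while the memory-heavy HighMem option carries a premium.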
Google has been competitive in cloud storage, with pricing formerly at $0.12 per GB for the first terabyte, now reduced 20% to $0.095 per GB. It's also introducing a "durable reduced availability" storage option that is 30% less expensive than its previous prices. The "reduced availability" means it may function much like standard storage, but in periods of high traffic it will be given a lower priority and respond with longer latencies than standard storage, Rao noted.
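The storage discounts can be sketched the same way. One assumption here: the article does not pin down the baseline for the 30% "durable reduced availability" discount, so this sketch takes both cuts relative to the old $0.12 rate:

```python
# Storage pricing sketch from the article's figures (dollars per GB
# for the first terabyte). Baselines are an assumption, not Google's
# published formula.
old_rate = 0.12
standard = round(old_rate * 0.80, 3)  # 20% cut -> 0.096, listed as $0.095
dra      = round(old_rate * 0.70, 3)  # 30% cheaper option -> 0.084
print(standard, dra)
```

The computed 20% cut lands at $0.096, a shade above the $0.095 list price, suggesting the actual reduction is slightly more than 20%.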
Rao said Google's offerings benefit from the strong networking characteristics of Google data centers, which can deliver sub-second search results.
Google is also adding a persistent disk snapshotting service, with snapshots sent to a customer-designated backup location, if desired.
Rao acknowledged that Google was unlikely to win existing Amazon customers with "limited preview" services. But he said Google was going after the startup market. "We are a very good option for them, and for new people moving to the cloud," he said.