Google cloud architect Urs Holzle has a plan for more robust cloud services to compete with Amazon Web Services. But Google's cloud to-do list remains long.
Google plans to make a major announcement Tuesday on how it will expand its cloud services to compete more aggressively with Amazon Web Services. But if Google is serious about this strategy, the company must change a number of things about its approach to cloud computing.
Google's chief cloud architect and senior VP of technical infrastructure Urs Holzle will give a keynote at Google's Cloud Platform meetup in San Francisco Tuesday morning, where he will discuss new features for App Engine and Compute Engine. When it comes to customer choice and services, Google is not yet in the same league as AWS.
Google senior VP of technical infrastructure Urs Holzle.
If anyone can convince onlookers that Google is serious about becoming a major cloud contender, it's Holzle. A former University of California, Santa Barbara computer science professor who joined Google in its early search engine days to build a more cloud-oriented infrastructure, Holzle redesigned Google's approach to computing from the ground up, coming up with a leaner server design, more efficient airflow cooling, and more accessible, maintainable motherboards and server parts. Under his leadership, Google evolved from a cluster of a hundred servers at its headquarters in Mountain View, Calif., to dozens of data centers around the world.
When Steve Ballmer declared last July that Microsoft was running a million servers in its infrastructure, he also acknowledged that Google was running even more than that. Google search and Google cloud services are powered from the same infrastructure.
Cloud service providers need to stop thinking of server clusters, server farms, and other large aggregations of computers as oddities or exceptions to the general pattern of computing. In the cloud, Holzle said, the datacenter is the computer, and cloud software runs across all its servers to keep them up and running -- even when a piece of hardware fails underneath it.
Google launched App Engine, its platform as a service (PaaS), in 2008. It introduced its infrastructure as a service (IaaS) in beta form in June 2012, making it generally available last December.
In comparison, Amazon launched its IaaS in beta in 2006, six years before Google's, and since then the company has become host to such independent PaaS suppliers as Engine Yard, Heroku, and Pivotal's Cloud Foundry.
In order for Google to catch up with Amazon on the general-purpose cloud computing front, it must escape the limits it set on its App Engine PaaS, which came out of the gate running Google's favorite language -- Python -- and not much else. Google later added Java and the languages that run in the Java Virtual Machine. Google needs to broaden App Engine's capabilities without sacrificing what the company has described as its inherent performance advantage.
A recent Wired magazine article indicated that Google will apply the best of both of its existing worlds -- App Engine PaaS and Compute Engine IaaS -- in its new service offerings. Does that mean it will make PaaS a feature of a new and aggressively positioned infrastructure as a service instead of offering two separate services?
When it comes to asking Google for something new, however, we should be careful what we wish for. If Google attempts to combine IaaS and a more broadly conceived PaaS, it may risk losing one of the factors that currently distinguishes its platform: the ability to deliver consistent performance. A more complex PaaS, especially as a feature of IaaS, might make performance levels harder to maintain. To keep performance an element of differentiation, Google must manage to broaden its platform and preserve that consistency at the same time.
Google has done an impressive job of building advanced capabilities into its basic service. Compute Engine will automatically scale a spike in traffic across a customer's total server set without requiring the customer to request load balancing in advance. Amazon customers have complained in the past that Amazon's Elastic Load Balancer, while effective, requires advance notice and time to fire up, therefore reacting belatedly to demand rather than staying on top of it. On other fronts, however, Google simply lacks Amazon's breadth of services.
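The difference between the two approaches can be sketched in a few lines. The snippet below is purely illustrative -- it is not Google's or Amazon's actual algorithm, and the per-server capacity figure is an assumption for the example -- but it shows the reactive idea: capacity is recalculated the moment load changes, with no advance provisioning request.

```python
import math

# Assumed capacity of a single server, for illustration only.
TARGET_RPS_PER_SERVER = 100

def servers_needed(requests_per_second):
    """Return the server count required to keep per-server load
    at or below the target -- recomputed on every traffic sample,
    so a spike triggers an immediate scale-out decision."""
    return max(math.ceil(requests_per_second / TARGET_RPS_PER_SERVER), 1)

# A spike from 150 rps to 950 rps is handled as soon as it is observed.
print(servers_needed(150))  # 2
print(servers_needed(950))  # 10
```

A real autoscaler adds smoothing and cooldown periods to avoid thrashing, but the contrast with a balancer that must be warmed up in advance is the point the complaint above is about.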
For example, Amazon's deeper management interfaces include Cloud Formation for provisioning a broad set of services at one time, OpsWorks for application management, and Elastic Beanstalk for deployment management. Google must come up with a richer management interface and view into running workloads.
Both Amazon and Google offer services to manage relational databases and big data. But Amazon understands the rapidly evolving nature of data management in the cloud, and also offers the RedShift data warehouse service, the EMR Hadoop hosted framework, Data Pipeline for orchestrating data-driven workflows, and Kinesis for real-time data stream processing. With the Internet of Things poised to increase the need for such services exponentially, Google must demonstrate more data-handling skills than it has shown to date.
On a broader front, Google has authored and developed several key cloud technologies used by most service providers today, including specially designed cloud hardware, the MapReduce parallel data-processing framework, and the BigTable NoSQL data store. But it's never quite mastered the scaled-back view of the world sometimes needed to connect with enterprise IT. While the company offers features that appeal to IT, such as automated encryption of data at rest, it still needs to figure out how to build more services that will help it gain traction in the enterprise.
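The MapReduce model Google described is simple enough to sketch in miniature. The toy word count below is illustrative only -- it is not Google's implementation, and in a real deployment the map and reduce phases are sharded across many machines in a data center, with the framework handling the shuffle and any hardware failures.

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle step: group intermediate values by key, as the framework would."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce step: sum the counts collected for each word."""
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["the cloud is the computer", "the datacenter is the computer"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["the"])  # 4
```

The appeal of the model is that each phase is embarrassingly parallel, which is why it became a foundation for Hadoop and the hosted big-data services mentioned above.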
Amazon has enough insight into the problems of traditional IT to innovate on that front with Trusted Advisor, for help in configuring cloud virtual machines; Virtual Private Cloud, for more isolated operations in the multi-tenant cloud; and Direct Connect, for private-line access to Amazon cloud data centers. Microsoft is also learning to talk to enterprise cloud customers, offering services such as compatible SQL Server in Azure instead of only Azure SQL Services.
Can Google convince enterprise IT that it's serious about supplying IaaS that serves enterprise needs? That's the key question Holzle must address. One way he might do this is to emphasize a Google strength in enterprise terms: Google had the foresight to acquire dark fiber when it was available, and its data centers are now believed to be linked by surplus fiber capacity. If Google could introduce a simple, automated data backup and recovery service that allows customers to designate where they want the data recovered, that would provide a competitive advantage for enterprise users.
However Holzle plays his cards, cloud computing users are likely to see a spokesman for a powerful new service with a more generalized, accessible infrastructure than the Google we've seen so far. Google App Engine and Google Compute Engine may be solid individual services, but a single unified Google Cloud Platform would be easier to understand and use -- and it may well be the service that Holzle unveils on Tuesday.
Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive ...