Google's Urs Hoelzle: Cloud Will Soon Be More Secure - InformationWeek

News | 4/30/2015 12:57 PM


Google's chief data center architect, Urs Hoelzle, says cloud security will improve faster than enterprise security in the next few years.

learned from its mistakes, "I'd say our track record has been very good."

Over the next few years, sensitive enterprise systems may more easily prove to be in compliance if they're running in the cloud, as opposed to on-premises, he asserted in his keynote. He cited a mock auditor's report that said a given enterprise system would have been clearly in compliance on the Google Platform, but "since you're running it yourself, I will have to flag it as a system at risk."

Google remains one of the world's largest users of Linux containers. Google software engineers were instrumental in producing Linux control groups -- some of the building blocks of containers that can keep one application from stepping on the toes of another on the same server.

Google has made its container-scheduling and deployment system, Kubernetes, into an open-source code project that will be able to manage Docker containers.

Hoelzle said an enterprise that adopts Kubernetes for container management on-premises will have no problem moving workloads running in it to the Google Cloud Platform, where containers are managed by Google's own Container Engine, a more advanced version of the open-source code.

"The configuration files, the packaging files -- the bits are exactly the same," whether a workload is running in a container under Kubernetes on-premises or on the Google App Engine/Compute Engine platform, Hoelzle said in an interview.

Furthermore, enterprises don't need to wait for the container picture to sort itself out further, in Hoelzle's view. "Kubernetes today is pretty functional. It doesn't have all the features we have, such as how to pack the best concentration of containers on a server, but it doesn't need it."

He doesn't view containers and virtual machines as being in an inevitable competition for the same workloads. Each will play a role where appropriate, but he also thinks containers are likely to run a larger share of cloud workloads in the future than they do today.

But clouds that make deft use of containers will have advantages over those that don't, he predicted. For example, a workload at rest, with no requests coming in to it at night, can be stored in a container and incur no charges in the Google platform infrastructure. If requests do show up, the workload is quick to instantiate because no operating system needs to be launched to get it running. It uses the host's.

Such deployment decisions could easily be left to the discretion of the service provider, with the customer not knowing whether his workload was running in a virtual machine, in a container, or temporarily stored at rest.

Such a scenario is more difficult to achieve with virtual machines, because the virtual machine's operating system must be retrieved and fired up before anything else can happen. On the other hand, virtual machines with their operating systems and harder logical boundaries are more adept at migrating live from one server to another.

"The customer shouldn't need to care" whether his application is running in a container or virtual machine, Hoelzle noted. Rather, the appropriate vehicle at the appropriate time should be available through the cloud vendor to him.

When it is, however, he's likely to notice a difference between the Google and Amazon Web Services platforms. A task that runs on Google Compute Engine when needed, and rests in a container at other times, is a better deal than an application running in a virtual machine 24 hours a day. That is particularly true because in the Google instance the charge is incurred by the minute, while in the AWS case the bill is compiled by the hour; the AWS billing mechanism rounds up to the next hour after the first 15 minutes of a new hour are completed.
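The difference between the two billing granularities is simple arithmetic. A minimal sketch, using a hypothetical hourly rate (real pricing also involves minimum charges and instance types):

```python
# Toy comparison of per-minute vs. per-hour billing at the same
# nominal rate. The rate and runtime are hypothetical.

import math

RATE_PER_HOUR = 0.10  # hypothetical $/hour for one instance


def per_minute_bill(minutes_used: float) -> float:
    """Charge by the minute, rounding up to the next whole minute."""
    return math.ceil(minutes_used) * (RATE_PER_HOUR / 60)


def per_hour_bill(minutes_used: float) -> float:
    """Charge by the hour, rounding any partial hour up to a full hour."""
    return math.ceil(minutes_used / 60) * RATE_PER_HOUR


# A bursty task that actually runs for 75 minutes over a day:
print(round(per_minute_bill(75), 4))  # 0.125  -- billed for 75 minutes
print(round(per_hour_bill(75), 4))    # 0.2    -- rounded up to 2 hours
```

For intermittent workloads, the finer the billing granularity, the closer the bill tracks actual use; a task idle in a container incurs nothing at all.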

On other issues, Google came out from behind what was at one time a reluctance to disclose details about its infrastructure. Among the most significant disclosures, Hoelzle said, is that Google data centers have implemented so many energy and component updates that they now do 3.5 times as much computing for the energy used as they did five years ago.

All Google data centers combined operate at an average power usage effectiveness (PUE) rating of 1.12, said Hoelzle. Google uses a "pessimistic," or highly inclusive, set of factors to calculate PUE, including the substation if it's on the same grounds as the data center. PUE compares the total amount of electricity brought into the data center to the amount consumed strictly for computing by servers and other IT equipment. A rating of 1.0 would indicate all power was being used for compute.

The typical enterprise data center has a rating between 1.8 and 2. Facebook claims a PUE of 1.06 for its Prineville, Ore., facility.
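The ratio itself is straightforward to compute. A small sketch using the PUE figures quoted above (the kilowatt numbers are invented to make the arithmetic concrete):

```python
# PUE (power usage effectiveness) = total facility power / IT equipment
# power. A value of 1.0 would mean every watt goes to compute.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Ratio of all power entering the facility to power used by IT gear."""
    return total_facility_kw / it_equipment_kw


# A facility drawing 1,120 kW to support a 1,000 kW IT load matches
# Google's reported fleet-wide average:
print(round(pue(1120, 1000), 2))  # 1.12

# A typical enterprise data center at PUE 2.0 draws twice its IT load,
# spending as much on cooling and distribution as on computing:
print(pue(2000, 1000))  # 2.0
```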

Hoelzle had reservations about the effectiveness of PUE as the sole measure of efficiency, but said its progression downward is an indicator of how much progress has been made in a five-year period.

In addition, he named the Google data center in a small town east of Brussels as the first data center in the world to run without air conditioners, or "chillers" in data center parlance. Instead, it uses water extracted from a nearby canal, filtered and then drizzled down a knobby screen with ambient air flowing over it. Evaporation cools the air by about 15 degrees as it passes over the screen, and the cooled air is then used to cool the data center.

On the coast of Finland, a former paper mill serves as the first data center that's cooled by seawater, he said. The mill employs a tunnel "large enough to drive a truck through" to bring seawater into the building. Google uses the water from the channel in a heat exchanger that cools water piped to the computer systems, in order to cool them.

Hoelzle also showed a picture of the corkboard-mounted servers that Google first built itself as a cost-saving measure. One of those first corkboard servers is in the Computer History Museum in Mountain View, Calif. Google stripped redundant fans, power supplies, and other components out of its home-built servers to reduce their cost. It had designed cloud software that tolerated the failure of a server underneath it and migrated its workloads to another server.

A Google facilities engineer has designed a system that uses machine learning to coordinate all the cooling mechanisms of a data center based on conditions inside and outside. It can determine which variable-speed pumps need to bring more cooling water to the heat exchanger, or it can dictate which fans should work a little harder to keep the temperature at an acceptable level. The system has been applied to about half of Google's data centers and will be in all of them by the end of the year, he said.
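The article doesn't describe how that system works internally. As a toy sketch of the general idea only, one could fit a model mapping sensor readings (outside temperature, IT load) to observed cooling demand, then query it to decide how hard the pumps should run; every number and name below is invented:

```python
# Toy sketch: learn a linear map from (outside temp, IT load) to cooling
# demand with plain stochastic gradient descent, then query it for
# intermediate conditions. All data is fabricated for illustration.

def features(temp_c, it_load):
    """Center the raw readings so the least-squares fit is well-conditioned."""
    return (temp_c - 25.0, it_load - 0.7)


def predict(w, x):
    return w[0] + w[1] * x[0] + w[2] * x[1]


def train(samples, lr=0.01, epochs=2000):
    """Minimize squared error with per-sample gradient steps."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in samples:
            err = predict(w, x) - y
            w[0] -= lr * err
            w[1] -= lr * err * x[0]
            w[2] -= lr * err * x[1]
    return w


# Invented history: (outside temp C, IT load fraction) -> cooling kW.
history = [((20, 0.5), 40), ((30, 0.5), 55), ((20, 0.9), 60), ((30, 0.9), 75)]
w = train([(features(t, load), kw) for (t, load), kw in history])

# Estimated cooling demand for intermediate conditions, which a
# controller could translate into pump and fan speeds:
est = predict(w, features(25, 0.7))
```

Google's production system is far more sophisticated, but the control loop is the same shape: sense, predict, actuate.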

Hoelzle is co-author of the pioneering cloud architecture book, The Datacenter as a Computer.

Interop Las Vegas, taking place April 27-May 1 at Mandalay Bay Resort, is the leading independent technology conference and expo series dedicated to providing technology professionals the unbiased information they need to thrive as new technologies transform the enterprise. IT Pros come to Interop to see the future of technology, the outlook for IT, and the possibilities of what it means to be in IT.

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive ... View Full Bio

Comments
MetricStream,
User Rank: Apprentice
5/26/2015 | 10:27:41 AM
Vibhav Agarwal, Senior Manager of Product Marketing at MetricStream
One of the most interesting points that stands out to me is the fact that Mr. Hoelzle says that soon enterprise systems may more easily be in compliance with data regulations if they're running on the cloud. We know that transitioning to the cloud is becoming the new normal. Instead of enterprises continuously trying to re-build their Governance, Risk and Compliance (GRC) programs to keep pace with updated regulations and emerging cloud service models and technologies, they should adopt a layering approach, which allows organizations to create a single source of GRC truth across the enterprise. 
Charlie Babcock,
User Rank: Author
5/5/2015 | 9:22:28 PM
A uniform environment easier to protect
GAProgrammer, You have a point that hackers have only one environment to decipher, but I think that the notion that a confusing environment is also a protected environment has been discredited. Better to know what you've got and take the best measures to shield it than to have intruders slipping in through the side door. 
GAProgrammer,
User Rank: Ninja
5/4/2015 | 10:25:32 AM
The homogeneous nature is also the Achilles' heel
as in, hackers only have one target to focus on. Once they break that, they have the keys to the entire kingdom. Except now, they don't have control of one system (instance) - they now have access to the computing power of hundreds or thousands of instances. Imagine the havoc THAT could create.
Li Tan,
User Rank: Ninja
5/2/2015 | 12:55:47 AM
Re: A leading Google thinker, writer
You should never trust any disclaimer that something is secure - as long as it's a service offered to the public or to a group of people, it cannot really be secure all the time.
laredoflash,
User Rank: Strategist
5/1/2015 | 9:06:38 PM
Re: A leading Google thinker, writer
The cloud will never be secure. Got that. The cloud is equal to another Wall Street meltdown. There is no way to secure something that everyone can see. Every couple of years we go through the same old BS. Wall Street with their melt down, and IT with their black hole. No more lies please. Master Rod
RetiredUser,
User Rank: Strategist
5/1/2015 | 10:29:44 AM
Re: What if Target had been on Amazon?
I know, it's hard to argue alternatives when of your two choices one has some inherent architecture that allows for such detection (cloud) and the other must be purposefully configured to monitor for events like the Target hack.

But we must have alternatives, I believe, as businesses.  Perhaps it is too early to go there, and businesses need to be completely responsible in implementing whatever that alternative is for industry to accept alternatives to the cloud as a sign of security compliance.

Yes, you're right - here and now, had Target been on a cloud service, this most likely would have ended much better than it did.  
nasimson,
User Rank: Ninja
4/30/2015 | 10:57:15 PM
an aspect that is perceived as the weakest
Urs has been a thought leader and a practitioner at the cutting edge ever since 2004, when he did his PhD at Stanford. So it's more than a decade of expertise.

What confidence he's inspiring in cloud security - an aspect that is perceived as the weakest.
Charlie Babcock,
User Rank: Author
4/30/2015 | 5:51:54 PM
What if Target had been on Amazon?
Christian, Your points are well taken and things will play out as you describe in many enterprise data centers. But take Target, for example. Target doesn't believe in using the Amazon cloud, I just heard in the session that I'm attending at Cloud Connect/Interop. But what if Target, by some stretch of the imagination, had been operating on Amazon infrastructure? If it were, I suspect, someone there would have noticed the unusual pattern of millions of credit card numbers being streamed out the door to an address in Russia.
RetiredUser,
User Rank: Strategist
4/30/2015 | 3:55:57 PM
The Cloud - All It's Cracked Up To Be?
I definitely agree that "cloud" security is moving at a fast pace, perhaps since that supports the brunt of user activity these days, whether it is collaborative development teams interacting with their source version control repositories or mobile apps.  However, we have to be careful not to think there is only one technology platform on which to do cloud computing in order to maintain that pace - security should cross architecture boundaries as easily as the cyber criminals that attack them do.  GNU/Linux, NoSQL and even the hardware running cloud computing systems could all exit stage left when the next big thing comes along.  True, containers are hot - smoking hot - and fun to build and deploy.  But should container use be the only game in town toward entering the cloud?  

The point of securing one type of system and then replicating that secured type hundreds or thousands of times sounds appealing at first.  You could argue that that represents a "better" type of security than current Enterprise data security.  But taking that to the next logical step, someone out there _is_ going to find the Achilles heel in container architecture - perhaps coming from an angle not previously considered, as the cloud as an overall ecosystem is more complex than just plugging in a secure container - and then you will effectively have done a great job lining up the dominoes.  I don't know that it makes sense to do a one-to-one comparison between "the cloud" as a single complex system and to any one of thousands of single complex systems in the form of various Enterprise servers in that ecosystem.  If only from having been alive a while, I believe these "little holes" can appear just as easily in the cloud as on any given complex system. 

All that said (and apologies for sounding critical of Hoelzle - I'm a mere college dropout whose focus since the 90s has almost entirely been on GNU/Linux builds and Emacs Lisp deployments), once you start talking about "encryption-on-the-fly" and "scanning systems" you are now getting my head to nod.  Data and the network on which it is transported is everything.  The function of mapping your ecosystem (I assume within a simulation - complex ecosystems should never simply be documented without a live simulation that can be updated with real-time changes pulled from results of scanning) should be project "day 1" discussion material and always be developed through to Production Go Live.       

In terms of exploits, to suggest that software changes that occur behind APIs must be invisible to cyber criminals is a tad optimistic, however.  Using Google stats to support this idea and the overall security of the Google architecture 1) doesn't help other companies who are developing their own cloud ecosystems and may want to diverge entirely from Google-style architecture, and 2) begs the question of what incidents have already occurred, and what exploits may exist that have not yet made their way to the boards, that we aren't currently aware of.  How many technologies have presented with similar assurances and have later been opened up and shown to have some vulnerability? 

As I said, I'm nobody - a worker bee in the bowels of a digital domain.  But I know better than to ever take anything I put together for granted, based upon its architecture or that of the system and network it sits on.  There is always a hole, and there is always someone who can find it, exploit it and bring decades of work to the ground.  To suggest a sensitive enterprise system can gain "compliance" simply by being migrated to the cloud is, well, reckless.  No, I understand that this is all in the context of Google, and that whoever the auditor referenced in the keynote is was probably doing the simple math that formal auditing standards have today, but that just doesn't sound like good security to me. 

I will say that I agree that the end user should have the same transparent experience regardless how the application they access is served up.  Or from where.  And the "where" is why I feel slightly nervous in reading this - the network is everything.  The age of the datacenter is slowly coming to a close, and distributed computing - the natural successor to the datacenter - is rapidly growing across the globe.  Security must never be seen as a one-time step, especially at the application level, and the network should also always be seen as a living thing, changing every second and at risk every second. 

On another note, Urs Hölzle has talked at length about the adoption of open-source infrastructure technology OpenFlow in an article from Wired [0] - VERY, VERY COOL material and ideas. 

[0] Wired, April 2012, Going With the Flow: Google's Secret Switch to the Next Wave of Networking by Steven Levy