The Cloud - All It's Cracked Up To Be?
I definitely agree that "cloud" security is moving at a fast pace, perhaps because the cloud now bears the brunt of user activity, whether that is collaborative development teams interacting with their version control repositories or mobile apps. However, we have to be careful not to assume there is only one technology platform for cloud computing in order to maintain that pace - security should cross architecture boundaries as easily as the cyber criminals who attack them do. GNU/Linux, NoSQL, and even the hardware running today's cloud systems could all exit stage left when the next big thing comes along. True, containers are hot - smoking hot - and fun to build and deploy. But should containers be the only game in town for entering the cloud?
The idea of securing one type of system and then replicating that secured type hundreds or thousands of times sounds appealing at first. You could argue that it represents "better" security than current Enterprise data security. But take that to the next logical step: someone out there _is_ going to find the Achilles heel in container architecture - perhaps from an angle not previously considered, since the cloud as an overall ecosystem is more complex than just plugging in a secure container - and at that point you will have done a great job of lining up the dominoes. I am not sure it makes sense to draw a one-to-one comparison between "the cloud" as a single complex system and any one of the thousands of complex systems, in the form of various Enterprise servers, within that ecosystem. If only from having been alive a while, I believe these "little holes" can appear just as easily in the cloud as on any given complex system.
All that said (and apologies for sounding critical of Hölzle - I'm a mere college dropout whose focus since the 90s has been almost entirely on GNU/Linux builds and Emacs Lisp deployments), once you start talking about "encryption-on-the-fly" and "scanning systems" you are getting my head to nod. Data, and the network on which it travels, are everything. Mapping your ecosystem - ideally within a simulation, since complex ecosystems should never simply be documented statically but modeled live and updated with real-time changes pulled from scanning results - should be "day 1" discussion material on any project, and should keep being developed right through to Production Go Live.
In terms of exploits, however, suggesting that software changes occurring behind APIs must be invisible to cyber criminals is a tad optimistic. Using Google stats to support this idea, and the overall security of the Google architecture, 1) doesn't help other companies who are developing their own cloud ecosystems and may want to diverge entirely from Google-style architecture, and 2) raises the question of what incidents have already occurred - what exploits may exist that have not yet made their way to the boards, that we simply aren't aware of? How many technologies have arrived with similar assurances, only to be opened up later and shown to harbor some vulnerability?
As I said, I'm nobody - a worker bee in the bowels of a digital domain. But I know better than to take anything I build for granted, based upon its architecture or that of the system and network it sits on. There is always a hole, and there is always someone who can find it, exploit it, and bring decades of work to the ground. To suggest a sensitive enterprise system can gain "compliance" simply by being migrated to the cloud is, well, reckless. Yes, I understand that this is all in the context of Google, and that whoever the auditor referenced in the keynote is, they were probably doing the simple math that today's formal auditing standards allow - but that just doesn't sound like good security to me.
I will say that I agree the end user should have the same transparent experience regardless of how the application they access is served up. Or from where. And the "where" is why I feel slightly nervous reading this - the network is everything. The age of the datacenter is slowly coming to a close, and distributed computing, its natural successor, is rapidly growing across the globe. Security must never be treated as a one-time step, especially at the application level, and the network should always be seen as a living thing, changing every second and at risk every second.
On another note, Urs Hölzle has talked at length about Google's adoption of OpenFlow, an open networking standard, in an article from Wired - VERY, VERY COOL material and ideas.

Steven Levy, "Going With the Flow: Google's Secret Switch to the Next Wave of Networking," Wired, April 2012