Amazon's Vogels: Sensitive Data Vs. Snowden Fear

Amazon CTO Werner Vogels, speaking at the Structure 2014 conference, disputes the notion that holders of sensitive data will reject the cloud after the Snowden revelations.

Charles Babcock, Editor at Large, Cloud

June 19, 2014

8 Min Read


Businesses, governments, and consumers are nervous about storing their data in the cloud after Edward Snowden's revelations of NSA snooping. This is especially true of Europeans, who have little appetite for subjecting their data to US laws.

Werner Vogels, CTO of Amazon Web Services and a native of Amsterdam, the Netherlands, agreed with that assessment by Om Malik, chairman of technology news service GigaOm, on Wednesday. Malik interviewed Vogels at GigaOm's seventh annual Structure conference, held Wednesday and Thursday in San Francisco. But Vogels said the Snowden revelations hadn't dampened interest in Amazon's infrastructure-as-a-service.

"Our growth outside the US is as strong as ever," he told close to a thousand attendees at the conference. Amazon offers European customers the option of keeping their data in an Amazon cloud center based in Europe. It's also adding increasingly sophisticated encryption and other means of securing data.

Instead of trying to maintain their own security, people should turn to the cloud to store their most private and confidential data, including their private encryption keys, because it can be made secure there, Vogels said, in an unusual assertion of EC2's capabilities. In the past, companies have moved development and testing to the cloud for the agility it provides. "AWS should be the place where you put the data you want to protect," he said.

"I see most of this as an opportunity, not as something that is really bad. It's an opportunity to give customers tools to protect themselves," assured Vogels.


Vogels has been emboldened in his claims for cloud security since Amazon won a $600 million contract to build a cloud operation to be run privately by the CIA -- a "cloud for members only," as Vogels obliquely described it during his Structure appearance. The contract runs over a 10-year period, and Amazon's win survived an IBM protest and court challenge, decided in Amazon's favor last November.

Figure 1: Werner Vogels (Source: Guido van Nispen on Flickr.)

"I've not yet seen a privacy requirement that can't be addressed by good architecture," he told the Structure conference.

Vogels said customer data integrity, privacy, and confidentiality were areas where cloud vendors could work more cooperatively to reduce fears that the cloud will always be insecure. "It's not a winner-take-all market," he insisted. "We can all work together to get more customers into the cloud."

On a different issue, Diane Bryant, Intel's general manager of its data center group, told attendees that Intel was moving away from its orientation toward consumer parts -- chips for PCs and other consumer devices -- and beginning to produce custom chips for companies that need them by the thousands for servers in a specially designed cloud.

Intel has customized processors from its Xeon family for both eBay and Facebook and is willing to continue the practice for buyers who have large-volume orders to build out cloud data centers, she said. Intel doesn't start from scratch and design a processor to meet a customer's needs. Rather, it combines a Xeon with a field programmable gate array (FPGA), an integrated circuit that can be programmed to execute algorithms specified by a particular customer.

Bryant said Intel has been working with cloud data center builders as a market segment since 2007, but she didn't say how long the specialized Xeon-and-FPGA packages have been in production. The FPGA is placed alongside a Xeon E5 processor, such as Ivy Bridge or Sandy Bridge, in a shared package. "The FPGA is married to a Xeon E5 chip ... The FPGA has direct access to the cache hierarchy and system memory of the Xeon CPU," said Bryant. The FPGA chip can then work in tandem with the Xeon, executing special algorithms "to deliver on-demand performance," she added.

The combination allows cloud builders to redesign their server racks, creating "pools of compute, memory, and storage" that can be allocated and dynamically reallocated to an application, depending on the traffic it's experiencing.
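The pooling idea Bryant describes can be sketched in software. The following Python toy (the names, unit counts, and API are illustrative assumptions, not Intel's design) shows capacity being handed to one application during a traffic spike and reassigned to another when the spike subsides:

```python
# Toy model of a disaggregated resource pool: capacity is not tied to a
# fixed server, so it can be reassigned to whichever application needs it.

class ResourcePool:
    def __init__(self, compute_units):
        self.free = compute_units          # unallocated capacity
        self.allocations = {}              # app name -> units held

    def allocate(self, app, units):
        """Give an application more capacity, if the pool has it."""
        if units > self.free:
            raise RuntimeError(f"pool exhausted: {self.free} units left")
        self.free -= units
        self.allocations[app] = self.allocations.get(app, 0) + units

    def release(self, app, units):
        """Return capacity to the pool when an app's traffic drops."""
        held = self.allocations.get(app, 0)
        units = min(units, held)
        self.allocations[app] = held - units
        self.free += units

pool = ResourcePool(compute_units=100)
pool.allocate("photos", 60)        # photo traffic spikes
pool.allocate("search", 30)
pool.release("photos", 40)         # spike subsides
pool.allocate("search", 40)        # capacity dynamically reallocated
print(pool.allocations, pool.free)
```

The point of the model is that no capacity is stranded on an idle box: whatever the photo application gives back is immediately available to search.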

Increasingly sophisticated use of the specialized processors will give the cloud "its next big pop in efficiency," she predicted.

Telcos have many functions, repeated millions of times a day, that would benefit from being executed in silicon instead of software. Facebook might use such a specialized processor to break photos down and store them as parallel data streams, or retrieve them from storage the same way.
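The parallel-streams idea can be sketched in plain Python: split a blob round-robin into N stripes that could be written or read concurrently, then reassemble it losslessly on retrieval. This is a software analogy only; the article describes doing the equivalent in silicon, and none of these function names come from Facebook:

```python
def stripe(blob: bytes, streams: int) -> list[bytes]:
    """Split a blob round-robin into N parallel streams."""
    return [blob[i::streams] for i in range(streams)]

def unstripe(stripes: list[bytes]) -> bytes:
    """Reassemble the original blob from its stripes."""
    out = bytearray(sum(len(s) for s in stripes))
    for i, s in enumerate(stripes):
        out[i::len(stripes)] = s
    return bytes(out)

photo = bytes(range(256)) * 4          # stand-in for image data
parts = stripe(photo, 4)               # write 4 streams in parallel
assert unstripe(parts) == photo        # read them back losslessly
```

Because each stripe is an independent byte stream, the four writes (or reads) can proceed on separate channels at once, which is where the throughput win comes from.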

Bryant conceded there was little role for such chips in the general-purpose data center. The "hybrid" chips are best used in Web-scale operations where a single application or a set of functions shared across a few applications is being used at very large scale.

The event also saw Jay Parikh, Facebook's VP of infrastructure, show off a "blue switch" built to Facebook's specifications and now being employed in Facebook data centers. The top-of-rack switch is designed to optimize traffic on the rack for the Facebook application and is the result of Facebook's Open Compute Project, launched in 2011. Under Open Compute, hardware designs are treated as open source documents, and any data center builder may take a design to an original equipment manufacturer.

Facebook executives believe their data centers will benefit if they publish their server and switch designs and give more parties a stake in their use. The more participants in Open Compute, the more modifications and innovations it will see in the shared designs.

Facebook may eventually be able to use a set of switches customized for different tasks at the top of the rack. The blue switch is meant to support Facebook's core social networking system, known internally as "the big blue application." It has self-healing properties, as shown in a video in which a wire cutter appears from the edge of the screen to snip a cable attached to the switch. The switch routes around the disabled link and redirects traffic to the remaining functioning ones.
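The self-healing behavior demonstrated in the video can be sketched as flow re-hashing: each traffic flow is hashed onto one of the currently live links, so when a link dies, its flows automatically land on the survivors. This Python toy illustrates the general technique, not Facebook's actual switch firmware, and the link and flow names are made up:

```python
import hashlib

def pick_link(flow_id: str, live_links: list[str]) -> str:
    """Deterministically hash a flow onto one of the live links."""
    digest = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16)
    return live_links[digest % len(live_links)]

links = ["eth0", "eth1", "eth2", "eth3"]
flows = [f"flow-{i}" for i in range(8)]

before = {f: pick_link(f, links) for f in flows}
links.remove("eth2")                      # the wire cutter strikes
after = {f: pick_link(f, links) for f in flows}

# Every flow still gets a live link; traffic routes around the failure.
assert all(l in links for l in after.values())
```

Flows that were on the healthy links may move too when the link count changes; production switches typically use consistent hashing to minimize that churn, but the failover property is the same.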

The blue switch is another step in Facebook's "disaggregation" of the data center, or breaking it up into repeatable modules that can be scaled out horizontally. The practice gets away from proprietary hardware and software as much as possible. Facebook's ability to go deep into its own hardware and software stack has resulted in optimized operations that have saved $1.2 billion over the last two years, Parikh said.

Compute and storage have already been successfully modularized and made customizable, according to the needs of the cloud vendor, he noted, "but the network is the last piece... The network is the next place for us to be working together," he said.

Structure conference attendees also heard from Urs Holzle, Google's chief cloud architect, who confessed that early in his career at Google he would go home on Friday wondering whether the search engine would have as much capacity as it needed on Monday. Its growth was so pronounced that "traffic ran very close to capacity," and Google facilities managers had trouble adding data center space fast enough. Google designed its own data centers and the servers that would occupy them, kicking off an arms race among Facebook, Microsoft, Amazon, and other cloud data center builders, each vying to produce the most power-efficient design for its own operations.

Holzle was limited in the early days by Google's $25 million second (and last) round of equity funding in 1999. As search started to generate revenues, his problems eased. "Today is so much easier. I don't have to worry about capital expenses," he said. Google got into designing its own servers "to save money," he noted. Its cloud architecture didn't need many of a conventional server's redundant parts: the cloud management software detected a piece of failed hardware, resurrected its data somewhere else, and resumed operations.

Holzle confirmed that Google tasks, whether in its search engine and Maps operations or in the Google Compute Engine, don't run in virtual machines. They run in Linux containers, a point that has been recently emphasized by advocates of greater use of containers in the cloud.

"Since we control the entire software stack, we're not forced to use virtual machines," he said. Many Google functions run faster in Linux containers than they would in virtual machines because "they run closer to the bare metal," he said.

Microsoft's Scott Guthrie, corporate VP for cloud, pointed out that Azure users may choose one of five Linux distributions -- including CentOS, Suse, and Ubuntu -- as well as Windows on the Microsoft cloud. (In effect, Linux runs in a Hyper-V virtual machine under Windows Server.)

Guthrie and other Microsoft executives position Azure as a more open architecture now because it's committed to long-term goals in cloud computing. In the long run, a handful of cloud providers will build very large, geographically distributed data centers to support their cloud services. Only three or four will be able to sustain the effort to operate "hyper-scale data centers," adding a million servers a year to their operations, he predicted.


About the Author(s)

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive Week. He is a graduate of Syracuse University where he obtained a bachelor's degree in journalism. He joined the publication in 2003.
