What were the key developments for enterprise cloud computing this year? Let's look at four big wins -- and three setbacks.
The researchers used a technique called a side channel attack to spy on use of the server's shared instruction cache over three to four hours. Doing so let them decipher enough of a 457-bit private key to reduce the number of guesses needed to crack it to 10,000. That number is relatively small in the world of private key spying, because trying all 10,000 remaining possibilities can be automated via systematic testing. The researchers used virtual machines generated under the Xen hypervisor, the same hypervisor generally used in Amazon Web Services' EC2 cloud and the Rackspace Cloud.
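That final step is what makes the attack practical: once the side channel has narrowed the key down to a small candidate set, each remaining candidate can be tested mechanically against the public key. A toy sketch of that testing loop, with tiny made-up numbers standing in for real key material (this is not the researchers' code):

```python
# Toy sketch of the automatable last step of such an attack: the side
# channel has narrowed a private exponent to ~10,000 candidates, and
# each one is checked against the known public value. All numbers here
# are small illustrative stand-ins, not real cryptographic parameters.

def recover_exponent(candidates, g, p, public_value):
    """Return the candidate exponent x for which g^x mod p matches the public value."""
    for x in candidates:
        if pow(g, x, p) == public_value:
            return x
    return None

p, g = 2**61 - 1, 5                   # toy prime and generator
secret = 123_456                      # the "unknown" private exponent
public_value = pow(g, secret, p)      # what the attacker can observe
candidates = range(120_000, 130_000)  # the set the side channel left behind
```

With a real 457-bit exponent the arithmetic is bigger, but the loop is the same: 10,000 modular exponentiations is trivial work for a single machine, which is why narrowing the search space that far effectively breaks the key.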
The researchers, including Yinqian Zhang of the University of North Carolina at Chapel Hill and Ari Juels of the RSA Laboratories unit of EMC, said it was unlikely anyone had yet used their technique to infiltrate workloads on cloud servers.
Nevertheless, Juels told Dark Reading, "The upshot is that isolation in public clouds is imperfect and can potentially be breached. So highly sensitive workloads should not be placed in a public cloud. Our attack is the first solid confirmation of a long hypothesized attack vector." It remains true that it's extremely difficult to create a locked-down computer system that still engages with the outside world. The bad guys are sure to find new ways to test the limits as they confront this architecture of virtualized servers in a cloud. Given the widespread use of virtualization in the cloud, this research is an unsettling demonstration.
Setback #3: Cloud Pricing Is Still A Mess.
Today, it's hard to tell much from most cloud computing bills. You might see that this month's total is higher than last month's. You can even see from a billing code how much of the increase came from a certain business unit, such as the marketing department. But you can't correlate the marketing department's workloads to the charges in the bill, to know what activities are driving usage.
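What that correlation would look like, if providers exposed it, is a simple roll-up of tagged line items. A minimal sketch, with hypothetical field names rather than any vendor's actual billing schema:

```python
from collections import defaultdict

# Hypothetical tagged billing line items; the field names and figures
# are illustrative only, not any cloud vendor's real export format.
line_items = [
    {"team": "marketing",   "workload": "campaign-site", "cost": 412.50},
    {"team": "marketing",   "workload": "analytics-etl", "cost": 187.25},
    {"team": "engineering", "workload": "ci-runners",    "cost": 530.00},
]

def costs_by(items, key):
    """Aggregate cost per distinct value of the given tag key."""
    totals = defaultdict(float)
    for item in items:
        totals[item[key]] += item["cost"]
    return dict(totals)
```

Grouping by `"team"` reproduces what today's billing codes already tell you; grouping the same records by `"workload"` is the missing view that would show which activities actually drive the marketing department's charges.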
What you'd really like to do is compare your cloud bill at Rackspace to what your charges would be on Google Compute Engine, SoftLayer Technologies, Bluelock or Microsoft Azure. But the comparison requires hours of spreadsheet work, isolating the relevant figures and then trying to translate them into comparable terms.
One of the trickiest parts is figuring out how each vendor defines its standard virtual CPU and what it charges for one. Amazon calls a virtual CPU the equivalent of a 2007 Xeon or Opteron core. Google also uses a physical reference point, but a different chip (one of the two threads in a Sandy Bridge Xeon core), leaving it to you to look up the differences and derive your own definition of where virtual CPU value lies. The documentation is confusing: it rates a logical core at 2.75 GCEUs, or Google Compute Engine Units (also referred to as GQs), and that logical core is itself only about half of a Sandy Bridge core. So a logical core isn't a core at all. In most cases it has only a vendor-defined relationship to some physical computing unit, with vendors all over the map.
Vendors also offer different configurations of standard servers. The major vendors offer small, medium and large configurations, with several extra-large types thrown into the mix. But each defines the combination of memory, CPU and storage a little differently, making direct comparison difficult. You'd almost think they want it to be difficult.
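The spreadsheet exercise described above boils down to converting every vendor's unit into one common denominator. A rough sketch of that normalization, using loose assumptions for illustration (the 2.75 GCEUs per logical core figure comes from Google's documentation; the prices and the ECU-to-GCEU conversion factor are rough stand-ins, and a real comparison would also weigh memory, storage and network):

```python
# Normalize vendor price sheets to dollars per common compute-unit-hour.
# Conversion factors here are illustrative assumptions, not published
# equivalences; memory, storage and network are deliberately ignored.

offerings = {
    # name: (price per hour in USD, vendor units, assumed GCEU-equivalents per vendor unit)
    "gce-logical-core": (0.145, 2.75, 1.0),   # 2.75 GCEUs per logical core
    "ec2-2-ecu-server": (0.120, 2.0,  0.9),   # 2 ECUs, assumed ~0.9 GCEU each
}

def cost_per_common_unit_hour(price, vendor_units, factor):
    """Dollars per normalized compute-unit-hour."""
    return price / (vendor_units * factor)

normalized = {
    name: cost_per_common_unit_hour(*spec) for name, spec in offerings.items()
}
```

The point isn't the specific numbers; it's that without an agreed conversion factor between, say, an ECU and a GCEU, every such comparison rests on an assumption the buyer has to supply.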
What's also tricky is proving whether your use of a public cloud costs less than doing the same work in house. Doing so takes good system accounting procedures and knowledge of what particular functions within the data center actually cost. InformationWeek columnist Art Wittmann has cast some healthy skepticism on this, noting that CPU performance and drive storage capacity keep climbing at exponential rates. "The Moore's Law advantage is immense and isn't something you should give up lightly, but some cloud providers are asking you to do exactly that," Wittmann writes.
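Wittmann's argument can be made concrete with a back-of-the-envelope model: if in-house cost per unit of compute halves on a Moore's Law cadence while a cloud price stays flat, the comparison shifts over time. All numbers below are made up for illustration:

```python
# Toy model of Wittmann's point: in-house unit cost falls exponentially
# (halving every `halving_months` months) while a cloud unit price stays
# flat. The starting costs are hypothetical.

def unit_cost_in_house(initial_cost, months, halving_months=24):
    """In-house cost per compute unit after `months`, on a Moore's Law curve."""
    return initial_cost * 0.5 ** (months / halving_months)

def crossover_month(initial_cost, cloud_cost, halving_months=24):
    """First month at which the falling in-house unit cost undercuts a flat cloud price."""
    month = 0
    while unit_cost_in_house(initial_cost, month, halving_months) >= cloud_cost:
        month += 1
    return month
```

Start in-house at a hypothetical $0.16 per unit against a flat $0.10 cloud price and the curves cross in under a year and a half. The model is crude, but it shows why a cloud price that merely matches today's in-house cost can look expensive a hardware generation later.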
There is help for deciphering pricing structures and figuring out how cloud computing fits into the IT budget. Some services will take the usage reporting stats provided by AWS and other vendors and use them to fill out a fuller picture. A warning: In some cases you have to give them your account identification information for them to do that, which turns a lot of critical compute data about your business over to an outsider. Candidates to help include Apptio, Cloudability and Newvem, with migration and management partners such as CloudVelocity, Skytap and RightScale able to provide fuller usage pictures with billing information.
There's reason for hope in 2013. As I was writing this, Amazon Web Services evangelist Jeff Barr posted a blog entry announcing that Amazon will now make available an hourly report itemizing usage of each server instance. That's a big step forward in visibility into the monthly cloud bill.
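An hourly, per-instance report is useful precisely because it rolls up cleanly into the views the monthly bill lacks. A minimal sketch of that roll-up, using hypothetical column names rather than the actual AWS report schema:

```python
import csv
import io
from collections import defaultdict

# Hypothetical hourly report; the column names and figures are
# illustrative, not the real AWS billing-report format.
sample = """instance_id,hour,cost
i-0001,2013-01-01T00,0.12
i-0001,2013-01-01T01,0.12
i-0002,2013-01-01T00,0.48
"""

def totals_per_instance(report_text):
    """Sum hourly charges into a per-instance total for the period."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(report_text)):
        totals[row["instance_id"]] += float(row["cost"])
    return dict(totals)
```

The same loop, keyed on the hour column instead of the instance ID, yields a usage-over-time view; either way, line-item granularity is what finally lets a finance team tie charges back to specific workloads.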