Each service supplier assembles a compute package that differs enough from the others to make comparison difficult. Microsoft, for example, surpasses Amazon in CPU strength and storage. By another important measure, though, Amazon offers more RAM at larger server sizes while matching Microsoft at the smaller ones.
Confused? The problem is there's no standard "serving size" when it comes to cloud computing, so IT pros have to juggle the many variables of RAM, local disk, CPU power, and more themselves. Is it a better deal to get lots of CPU but less RAM? The answer depends on what the cloud customer is trying to do with that computing power. For companies weighing public cloud options against running the computing services in-house, it complicates the calculation.
One metric that theoretically allows cloud-to-cloud comparison is how much physical CPU power stands behind a vendor's virtual CPU--the amount of processing you're getting with each designated virtual server. This approach compares a public cloud computing unit to a similar physical server that a company might run in-house. Amazon is one big user of this stat, but as we'll see, the approach has its limits for comparison.
Amazon uses what it calls "EC2 Compute Units" or ECUs, as a measure of virtual CPU power. It defines one ECU as the equivalent of a 2007 Intel Xeon or AMD Opteron CPU running at 1 GHz to 1.2 GHz. That's a historical standard, since it dates back to the CPUs with which Amazon Web Services built its first infrastructure as a service in 2006 and 2007. (The Amazon ECU is also referred to as a 2006 Xeon running at 1.7 GHz. Amazon treats the two as equivalent.)
The value of Amazon's ECU approach is that it sets a value for what constitutes a CPU for a basic workload in the service. Amazon's micro and small server instances are assigned one ECU. Its medium server is assigned two ECUs, its large server four, and its extra large eight.
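In code, Amazon's ladder of sizes and ECUs can be sketched like this (a minimal illustration; the `ECUS_BY_SIZE` table and helper function are ours, and the 1.0 GHz figure uses the low end of Amazon's 1.0-to-1.2 GHz definition):

```python
# Amazon's ECU assignments by instance size, as described in the article.
# One ECU ~ a 2007 Xeon/Opteron running at 1.0 to 1.2 GHz.
ECUS_BY_SIZE = {
    "micro": 1,
    "small": 1,
    "medium": 2,
    "large": 4,
    "extra large": 8,
}

def equivalent_2007_ghz(size, ghz_per_ecu=1.0):
    """Rough physical-CPU equivalent, using the low end (1.0 GHz)
    of Amazon's ECU definition."""
    return ECUS_BY_SIZE[size] * ghz_per_ecu

print(equivalent_2007_ghz("large"))  # 4.0 -> roughly 4 GHz of 2007-era CPU
```

The helper only converts sizes within Amazon's own scheme; as the article notes, that's exactly the kind of conversion that breaks down across vendors.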
Another cloud vendor, CloudScaling, has adopted Amazon's measurement approach as it designs and builds public and private cloud infrastructure for customers. By using the 2007 Xeon or Opteron unit, it can tell customers the physical equivalent of how their infrastructure will deliver cloud servers. But CloudScaling is one of the few to pick up on the Amazon definition, so ECU hasn't caught on as a standard, and thus won't solve the head-to-head comparison problem. "You can't transpose that measure to other clouds," CloudScaling CTO Randy Bias said. "There's no standard right now."
Microsoft's virtual CPU categories are similar to Amazon's, but it uses a different physical server as its benchmark. The server sizes roughly match up: Microsoft's "extra small" corresponds to Amazon's "micro," and both then use small, medium, large, and extra large.
However, Microsoft uses a different standard CPU as the measure of its virtual CPUs--designating the Intel Xeon 1.6 GHz CPU as its standard. This is a slightly newer chip with a stepped-up clock speed, but a bit of math yields an approximate comparison: a Microsoft virtual CPU amounts to about 62% more processing power than an Amazon ECU, according to Steven Martin, general manager of Microsoft's Windows Azure cloud.
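The arithmetic behind that conversion is easy to check. Clock speed alone--1.6 GHz against the low end of the ECU's 1.0-to-1.2 GHz range--gives about 60%; Martin's 62% figure presumably also reflects the newer chip's per-clock improvements:

```python
# Back-of-the-envelope conversion between Microsoft's and Amazon's
# virtual-CPU benchmarks, by clock speed alone.
AMAZON_ECU_GHZ = 1.0      # low end of Amazon's 1.0-1.2 GHz ECU definition
MICROSOFT_VCPU_GHZ = 1.6  # Azure's standard Xeon

extra = (MICROSOFT_VCPU_GHZ / AMAZON_ECU_GHZ - 1) * 100
print(f"Azure vCPU ~ {extra:.0f}% more clock than one ECU")  # ~60%
```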
Rackspace, one of the other more established cloud computing companies, sidesteps specifying what the virtual CPU is and uses RAM size as its main differentiator in cloud server sizes and prices. In an interview, Rackspace CTO John Engates said its physical equivalent for a virtual CPU is a four-core AMD 2350 HE at 2 GHz, a late-2007 processor. Rackspace says that one 2350 HE core at 2 GHz is the equivalent of two of Amazon's EC2 Compute Units (ECUs). Still, even with a physical CPU reference, "it's difficult to make a comparison," Engates acknowledged. Rackspace also delineates servers at regular RAM intervals without giving them a small, medium, or large designation.
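Rackspace's stated equivalence makes its host arithmetic easy to sketch (a simple illustration using only the figures Engates cites):

```python
# Rackspace's stated equivalence: one 2 GHz AMD 2350 HE core = 2 ECUs.
ECUS_PER_CORE = 2
CORES_PER_HOST = 4  # the quad-core 2350 HE

host_ecus = ECUS_PER_CORE * CORES_PER_HOST
print(host_ecus)  # 8 -> a full Rackspace host ~ eight ECUs
```

That puts one Rackspace host at roughly the CPU power of an Amazon extra-large instance--but, as the article goes on to note, Rackspace's shared-core model makes the actual cycles a customer sees less predictable than the raw number suggests.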
What Prices Reveal About Strategy
All the variations of CPU, RAM, and disk make head-to-head comparisons difficult, but they do reveal something about each vendor's strategy.
Microsoft, as we noted, almost matches Amazon on RAM in small servers while offering more CPU. But on larger cloud servers, it concedes the RAM lead to Amazon while still offering more CPU and disk. Microsoft seems bent on competing most fiercely with Amazon at the extra small and small categories, playing for the small- to medium-sized business, or the enterprise department, while giving Amazon less of a fight for the largest and most demanding cloud customers.
Rackspace's pricing shows less appetite for a confrontation with Amazon. It was the first to really push a super-small server option--offering an entry-level server for 1.5 cents an hour when Amazon's lowest priced unit was 8.5 cents. That small-and-cheap offering could be significant in a market where many prospective users start very small, either as a startup or with a small enterprise app dev project. Amazon and Microsoft eventually countered with very small server options of their own, but neither goes as low as Rackspace, whose smallest instance--a 256-MB server with less RAM than its rivals' entry options--is still just 1.5 cents an hour.
All this highlights the highly sophisticated analysis that might be needed to truly scope out the price and value of public cloud computing--especially if a big company is going to adopt it for large-scale computing, where small cost differences can add up to big numbers.
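A quick, purely hypothetical sketch shows how fast those small differences compound (the instance count and price gap below are invented for illustration, not drawn from any vendor's price list):

```python
# Hypothetical: how a small per-hour price gap compounds at scale.
price_delta_per_hour = 0.01  # one cent per instance-hour (illustrative)
instances = 1000             # illustrative fleet size
hours_per_month = 24 * 30    # a 30-day month

monthly_gap = price_delta_per_hour * instances * hours_per_month
print(f"${monthly_gap:,.2f}/month")  # $7,200.00/month
```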
Consider Netflix's experience. Amazon has set a high standard for performance with its infrastructure as a service. But customer Netflix found through its own careful workload performance analysis that it could lose 1% or more of its CPU cycles to "noisy neighbors" if its workload landed on a physical server that was already busy with another customer's active virtual machines. Netflix chief cloud architect Adrian Cockcroft aired his staff's findings in a March 2011 blog post, explaining that the wait for access to I/O and other resources crimped performance. Netflix started commissioning only the largest-sized virtual machines, so that a Netflix workload dominated its Amazon host and might even be the only resident on it. Most customers in the multi-tenant cloud can't do that, because they don't have large enough workloads--or, probably, the analytical experience to figure out the cost basis for it.
If an IT shop is using Rackspace, the picture is complicated even further by the fact that each Rackspace virtual server has access to all four cores of an AMD 2350 HE host, with a weighted share of each core guaranteed. If the other customers on a core aren't using all their CPU cycles--maybe a website in a low-traffic period--the remaining customers can use those cycles for free. But that's unpredictable, and it makes comparisons to the physical server world all the more difficult. Some published benchmarks of customer experience with Rackspace indicate workloads run faster there than in other cloud settings, but that success may depend on landing on hosts where the other customers had lightweight workloads at the time the test ran. On a heavily occupied server at the end of the quarter, when companies are using capacity to close their books, the results might be different.
Erik Carlin, Rackspace director of products, said tests have been consistent over time because "those free cycles [are] available most of the time." If Rackspace needed to improve its profit margins, however, one way to do so would be to stack more customers on the same number of servers.
Other Measurements Matter
Even if IT departments manage to get comfortable with CPU, RAM, and storage comparisons, there are other elements that will still have a big impact on cloud performance versus its cost.
Speed of data retrieval is one variable. Jason Read, co-founder of cloud metrics startup CloudHarmony.com, offers the example of newcomer cloud vendor SoftLayer, whose pricing is competitive with Amazon's but whose servers are a bit slower because of "1 Gbps SAN off-instance storage": there's no local disk on the server where the CPU is, so every disk access relies on a network connection to wherever the storage sits.
There also can be different billing cycles. Hosting.com charges by the month, not by the hour. Its basic Linux server is $145, plus $20 for storage. The rough equivalent from Amazon would be its medium Windows server at 23 cents an hour, or $165.60 in a 30-day month--though the Hosting.com server runs CentOS Linux, a free rebuild of Red Hat Enterprise Linux. It's not clear from the website whether the Hosting.com virtual CPU is equivalent to Amazon's medium-class server with two ECUs.
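Normalizing the two billing cycles takes only a line or two of arithmetic, using the figures quoted above and a 30-day month:

```python
# Put a monthly price and an hourly price on the same footing.
hostingcom_monthly = 145 + 20  # basic Linux server plus storage
amazon_hourly = 0.23           # medium Windows instance, per hour
amazon_monthly = amazon_hourly * 24 * 30  # 720-hour month

print(hostingcom_monthly)            # 165
print(round(amazon_monthly, 2))      # 165.6
```

On price alone the two come out within a dollar of each other--which is exactly why the unanswered question of whether the virtual CPUs match matters so much.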
And of course, there are different features vendors try to emphasize. Google App Engine offers a basic virtual server for 8 cents an hour, but it also comes with a package of development services and tools for producing software that deploys on App Engine--a model referred to as platform-as-a-service. Microsoft also promotes its Azure service's development platform, and Hewlett-Packard, which is just entering the cloud market, likewise emphasizes the range of development and software services it offers as a platform along with infrastructure.
When it comes to determining the value to IT, it looks like would-be cloud customers will have to make some guesstimates in comparing what these less easily matched factors are worth. "It's unfortunate," said CloudScaling's Bias, "but it's still the very early days of cloud computing. We haven't seen any real, credible threats to Amazon."
The reality is that not many companies have moved large-scale enterprise IT workloads into infrastructure as a service. As more IT shops consider doing so, more will tackle the kind of head-to-head, numbers-driven bakeoffs of cloud providers discussed here. If they do, you might start to see a thinning out of infrastructure-as-a-service providers, which today are a large and still growing field. What's likely is that more companies--like HP--will enter emphasizing more than raw computing, talking up related services such as development tools. Likewise, we'll see cloud infrastructure aimed at specific industry needs.
Even if the day arrives that there is a shared standard for the virtual CPU, the customer will still need a sharp pencil to calculate which service is the right one for a given company. For now, and likely for a while to come, no one seems eager to make head-to-head comparisons easy.