Each service supplier assembles a compute package that's different enough from the others that comparison is difficult. Microsoft, for example, surpasses Amazon in CPU strength and storage. But by another important measure, Amazon offers more RAM at its larger server sizes while matching Microsoft at the smaller ones.
Confused? The problem is there's no standard "serving size" when it comes to cloud computing, so IT pros have to juggle the many variables of RAM, local disk, CPU power, and more themselves. Is it a better deal to get lots of CPU but less RAM? The answer depends on what the cloud customer is trying to do with that computing power. For companies weighing public cloud options against running those computing services in-house, all this complicates the calculation.
One metric that theoretically allows comparison from cloud to cloud is how much physical CPU power is assigned to a vendor's virtual CPU--the amount of processing power you're getting with each designated virtual server. This approach compares a public cloud computing unit to a similar physical server that a company might run in-house. Amazon is one big user of this stat, but as we'll see, this approach has its limits for comparison.
Amazon uses what it calls "EC2 Compute Units" or ECUs, as a measure of virtual CPU power. It defines one ECU as the equivalent of a 2007 Intel Xeon or AMD Opteron CPU running at 1 GHz to 1.2 GHz. That's a historical standard, since it dates back to the CPUs with which Amazon Web Services built its first infrastructure as a service in 2006 and 2007. (The Amazon ECU is also referred to as a 2006 Xeon running at 1.7 GHz. Amazon treats the two as equivalent.)
The value of Amazon's ECU approach is that it sets a value for what constitutes a CPU for a basic workload in the service. Amazon's micro and small server instances are assigned one ECU. Its medium server is assigned two ECUs, its large four, and its extra large eight.
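That mapping lends itself to some back-of-the-envelope math. The sketch below is illustrative only (the function and variable names are my own, not Amazon's): it combines the instance-size-to-ECU mapping above with Amazon's stated 1.0 GHz to 1.2 GHz definition of one ECU to estimate the 2007-era physical clock speed an instance roughly corresponds to.

```python
# Back-of-the-envelope helper, not an official Amazon tool.
# ECU counts per instance size are as described in the article;
# the 1.0-1.2 GHz range per ECU is Amazon's own definition
# (equivalent 2007 Xeon/Opteron clock speed).

ECUS_PER_INSTANCE = {
    "micro": 1,
    "small": 1,
    "medium": 2,
    "large": 4,
    "extra large": 8,
}

GHZ_PER_ECU = (1.0, 1.2)  # Amazon's stated equivalence range

def equivalent_ghz(size):
    """Rough 2007-era Xeon/Opteron GHz range for an instance size."""
    ecus = ECUS_PER_INSTANCE[size]
    low, high = GHZ_PER_ECU
    return (ecus * low, ecus * high)

print(equivalent_ghz("large"))  # (4.0, 4.8)
```

So a "large" instance, at four ECUs, is being pitched as roughly the power of a 4.0 GHz to 4.8 GHz aggregate of 2007-vintage CPU.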
Another cloud vendor, CloudScaling, has adopted Amazon's measurement approach as it designs and builds public and private cloud infrastructure for customers. By using the 2007 Xeon or Opteron unit, it can tell customers the physical equivalent of how their infrastructure will deliver cloud servers. But CloudScaling is one of the few to pick up on the Amazon definition, so ECU hasn't caught on as a standard, and thus won't solve the head-to-head comparison problem. "You can't transpose that measure to other clouds," CloudScaling CTO Randy Bias said. "There's no standard right now."
Microsoft's categories of virtual servers roughly match up with Amazon's: its "extra small" corresponds to Amazon's "micro," and from there both use small, medium, large, and extra large.
However, Microsoft uses a different standard CPU as the measure of its virtual CPUs--designating the Intel Xeon 1.6 GHz CPU as its standard. This is a slightly newer chip with a stepped-up clock speed, but a bit of math can approximate a direct comparison: the Microsoft virtual CPU amounts to about 62% more processing power than the Amazon one, according to Steven Martin, general manager of Microsoft's Windows Azure cloud.
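A naive version of that math can be sketched from raw clock speeds alone (the variable names and the pure clock-ratio approach here are my illustration, not Martin's calculation; his 62% figure presumably reflects Microsoft's own benchmarking, which would account for the newer chip's architectural gains as well):

```python
# Rough clock-speed comparison, ignoring architectural differences
# between the 2007-era chips and the newer 1.6 GHz Xeon.
AMAZON_ECU_GHZ_LOW = 1.0   # low end of Amazon's ECU definition
AMAZON_ECU_GHZ_HIGH = 1.2  # high end of Amazon's ECU definition
AZURE_VCPU_GHZ = 1.6       # Microsoft's benchmark virtual CPU

# Percent more clock speed than one Amazon ECU
extra_vs_high = (AZURE_VCPU_GHZ / AMAZON_ECU_GHZ_HIGH - 1) * 100  # ~33%
extra_vs_low = (AZURE_VCPU_GHZ / AMAZON_ECU_GHZ_LOW - 1) * 100    # ~60%

print(f"{extra_vs_high:.0f}% to {extra_vs_low:.0f}% more clock speed")
```

Comparing pure clock rates puts the Azure virtual CPU roughly 33% to 60% ahead of one ECU, so Martin's 62% figure is in the same neighborhood as the low-end clock ratio, with the remainder plausibly coming from per-clock improvements in the newer silicon.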