Amazon used to have a clear definition of what a virtual CPU was. Now it goes by a "variety of measures," making cloud comparison difficult.

Charles Babcock, Editor at Large, Cloud

December 23, 2015

7 Min Read
(Image: arcoss/iStockphoto)


Once upon a time, when you signed up for a cloud server, you knew what its virtual CPU represented. Each supplier might define its virtual CPU a little differently, making it hard to compare one with another. But at least there was a definition, and customers had a firm idea of what they were getting via a named physical equivalent.

That's no longer the case at Amazon Web Services.

If you deal with server sizing and instance price comparison, then the measure -- previously expressed as an EC2 Compute Unit or ECU -- is kaput. In its place is the label "virtual CPU."

A change in nomenclature would be fine if there were a definition for the new name. However, a virtual CPU now looks suspiciously variable to me. A virtual CPU is whatever Amazon wants to offer in an instance series. The user has no firm measure to go by. This must be confusing to operations managers who have to compare instance sizes and try to do comparison pricing.

AWS started out defining its virtual CPUs as being composed of EC2 compute units, or ECUs, which it defined as an equivalent to a physical Xeon processor. On June 5, 2012, I wrote in a piece called "Why Cloud Pricing Comparisons Are So Hard":

Amazon uses what it calls "EC2 Compute Units" or ECUs, as a measure of virtual CPU power. It defines one ECU as the equivalent of a 2007 Intel Xeon or AMD Opteron CPU running at 1 GHz to 1.2 GHz. That's a historical standard, since it dates back to the CPUs with which Amazon Web Services built its first infrastructure as a service in 2006 and 2007. (The Amazon ECU is also referred to as a 2006 Xeon running at 1.7 GHz. Amazon treats the two as equivalent.)

The value of Amazon's ECU approach is that it sets a value for what constitutes a CPU for a basic workload in the service.
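For illustration, here's a minimal sketch in Python of the arithmetic that definition made possible. The ECU counts below are hypothetical examples, not an official AWS rate card, but the conversion is the one the 2007-era Xeon definition implied:

```python
# A rough sketch of the old ECU arithmetic described above. The ECU
# ratings here are illustrative examples, not an official AWS list.
ECU_GHZ_LOW, ECU_GHZ_HIGH = 1.0, 1.2   # 2007-era Xeon clock range per ECU

example_instances = {"m1.small": 1, "m1.large": 4}  # hypothetical ECU counts

for name, ecus in example_instances.items():
    low, high = ecus * ECU_GHZ_LOW, ecus * ECU_GHZ_HIGH
    print(f"{name}: {ecus} ECU ~= {low:.1f}-{high:.1f} GHz of 2007-era Xeon")
```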

ECUs were not the simplest way to describe a virtual CPU, but at least they had a definition attached. Operations managers and those responsible for calculating server pricing could use that measure for comparison shopping. But two years ago, without announcement, ECUs were dropped as a visible and useful definition in favor of a descriptor -- virtual CPU -- that means, mainly, whatever AWS wants it to mean within a given instance family.

What was once a precise number of ECUs in an instance has become simply "a virtual CPU."

My 2012 piece on comparing prices continued with Microsoft's virtual CPU definition:

Microsoft uses a different standard CPU as the measure of its virtual CPUs -- designating the Intel Xeon 1.6 GHz CPU as its standard. This is a slightly newer chip with a stepped up clock speed, but a bit of math can approximate a direct comparison: the Microsoft virtual CPU amounts to about 62% more processing power than the Amazon one, according to Steven Martin, general manager of Microsoft's Windows Azure cloud.

So by doing a little math, you could actually compare what you were getting in virtual CPUs in EC2 versus Azure. Also by doing a little math, you knew how to compare one Amazon instance to another based on the ECU count in each virtual CPU. Microsoft didn't look too bad in the comparison.
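To make that concrete, here's a minimal sketch of the cross-cloud math, assuming the roughly 62% figure Martin cited; the instance sizes are hypothetical examples:

```python
# Put both clouds on one scale by converting Azure virtual CPUs into
# Amazon ECUs, using the ~62% figure quoted above. The instance sizes
# below are hypothetical examples.
AZURE_VCPU_IN_ECU = 1.62          # one Azure vCPU ~= 1.62 Amazon ECUs

ec2_ecus = 4                      # e.g., an EC2 instance rated at 4 ECUs
azure_vcpus = 2                   # e.g., an Azure instance with 2 vCPUs

azure_ecus = azure_vcpus * AZURE_VCPU_IN_ECU
print(f"Azure: {azure_vcpus} vCPUs ~= {azure_ecus:.2f} ECUs; EC2: {ec2_ecus} ECUs")
```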

That easy comparison is one of the casualties of the nomenclature change.

I have searched for updated information on how a virtual CPU is measured and found nothing comparable to the 2012 ECU definition. I questioned Amazon representatives three times between Oct. 27 and Dec. 21, and still don't have much of an answer.

The answer basically is: Try to divine it from the instance matrix table on the AWS website.

A post two years ago by Gartner's Kyle Hilgendorf noted the passing of the ECU measure without attempting to explain what new standard had replaced it.

That appears to mean a virtual CPU may be variable within an instance family; it varies according to the value Amazon decides to assign it behind the scenes. Jeff Barr's blog announcement of the t2 series on July 1, 2014, for example, included a chart that listed the CPU of both the t2.micro and the t2.small as "one virtual CPU." But the micro owned 10% of the baseline processing power of a modern Xeon core, while the small owned 20%. The relationship is clear enough as listed there, but how one virtual CPU varies from another is not visible to those visiting Amazon's instance matrix table.

Moving the Goalposts

On Dec. 15, AWS introduced the t2.nano, which has access to 5% of the baseline processing power of a Xeon core, even though it, too, is assigned one virtual CPU. So in place of a firm unit of measure, the EC2 Compute Unit, we have "virtual CPU" as a variable term. Memory doubles for each instance as you go up the scale, but "one virtual CPU" remains the label. You can take it on faith that CPU power is increasing in step with the other resources, which I think is what Amazon is asking customers to do.
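A short sketch shows what those published baseline percentages imply. The figures come from the AWS announcements cited above, though the "modern Xeon core" they reference is itself unspecified:

```python
# Each t2 size carries the same "1 virtual CPU" label, but the baseline
# (sustained) share of a physical core spans a 4x range across sizes.
# Percentages are from the AWS announcements cited above.
baseline_share = {"t2.nano": 0.05, "t2.micro": 0.10, "t2.small": 0.20}

for size, share in baseline_share.items():
    print(f"{size}: labeled 1 vCPU, baseline = {share:.0%} of one core")

ratio = baseline_share["t2.small"] / baseline_share["t2.nano"]
print(f"t2.small sustains {ratio:.0f}x the compute of t2.nano -- same label")
```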

If you try to work out what constitutes a virtual CPU for each instance type in the Amazon instance matrix table, you'll find no "baseline processing power" percentages listed there. They must be searched out in product announcements or the occasional Jeff Barr evangelist blog.

The 2015 AWS FAQs page still includes a reference to ECUs, but the definition formerly attached to it has disappeared. It's the closest thing you'll find to an acknowledgement that ECUs are still in use behind the scenes, but Amazon no longer wishes to define them due to the changing nature of its underlying hardware. It says:

Amazon EC2 uses a variety of measures to provide each instance with a consistent and predictable amount of CPU capacity. In order to make it easy for developers to compare CPU capacity between different instance types, we have defined an Amazon EC2 Compute Unit. The amount of CPU that is allocated to a particular instance is expressed in terms of these EC2 Compute Units. We use several benchmarks and tests to manage the consistency and predictability of the performance from an EC2 Compute Unit. The EC2 Compute Unit (ECU) provides the relative measure of the integer processing power of an Amazon EC2 instance. Over time, we may add or substitute measures that go into the definition of an EC2 Compute Unit, if we find metrics that will give you a clearer picture of compute capacity.

[Want to learn more about why it's difficult to measure the comparative costs of the cloud? See Cloud's Thorniest Question: Does It Pay Off?]

I liked the old definition better: a 1 GHz Xeon processor of 2006-2007 vintage. While this text refers to making it "easy for developers to compare," I don't see how they can do that. It's infinitely harder now than it used to be.

The FAQ adds:

Because Amazon EC2 is built on commodity hardware, over time there may be several different types of physical hardware underlying EC2 instances. Our goal is to provide a consistent amount of CPU capacity no matter what the actual underlying hardware.

This comes close to admitting the real problem with ECUs: The measure was built on the first round of EC2 infrastructure, and virtual CPUs became increasingly hard to express in ECUs as new rounds of hardware were introduced. To know that your CPU power increases in step with memory and other resources, you have to trust Amazon with its "variety of measures" and its ongoing search for suitable CPU metrics.

Easy to compare? It was never easy to compare, and over the last two years, it's become almost impossible.


About the Author(s)

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive Week. He is a graduate of Syracuse University where he obtained a bachelor's degree in journalism. He joined the publication in 2003.

