Amazon's 'Virtual CPU'? You Figure It Out

Cloud | Commentary | InformationWeek
Charles Babcock | 12/23/2015 08:05 AM

Amazon used to have a clear definition of what a virtual CPU was. Now it goes by a "variety of measures," making cloud comparison difficult.


Once upon a time, when you signed up for a cloud server, you knew what its virtual CPU represented. Each supplier might define its virtual CPU a little differently, making it hard to compare one to another. But at least there was a definition, and customers had a firm idea of what they were getting via a named, physical equivalent.

That's no longer the case at Amazon Web Services.

If you deal with server sizing and instance price comparison, then the measure -- previously expressed as an EC2 Compute Unit or ECU -- is kaput. In its place is the label "virtual CPU."

A change in nomenclature would be fine if there were a definition for the new name. However, a virtual CPU now looks suspiciously variable to me. A virtual CPU is whatever Amazon wants to offer in an instance series. The user has no firm measure to go by. This must be confusing to operations managers who have to compare instance sizes and try to do comparison pricing.

(Image: arcoss/iStockphoto)

AWS started out defining its virtual CPUs as being composed of EC2 compute units, or ECUs, which it defined as an equivalent to a physical Xeon processor. On June 5, 2012, I wrote in a piece called "Why Cloud Pricing Comparisons Are So Hard":

Amazon uses what it calls "EC2 Compute Units" or ECUs, as a measure of virtual CPU power. It defines one ECU as the equivalent of a 2007 Intel Xeon or AMD Opteron CPU running at 1 GHz to 1.2 GHz. That's a historical standard, since it dates back to the CPUs with which Amazon Web Services built its first infrastructure as a service in 2006 and 2007. (The Amazon ECU is also referred to as a 2006 Xeon running at 1.7 GHz. Amazon treats the two as equivalent.)

The value of Amazon's ECU approach is that it sets a value for what constitutes a CPU for a basic workload in the service.
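That old definition made back-of-the-envelope capacity math possible. A minimal sketch of the idea (the clock range comes from the 2012 definition quoted above; the 4-ECU instance is a hypothetical example, not current AWS data):

```python
# Rough ECU-to-clock conversion based on Amazon's original definition:
# 1 ECU ~ one 2007 Intel Xeon / AMD Opteron core at 1.0-1.2 GHz.
ECU_GHZ_LOW, ECU_GHZ_HIGH = 1.0, 1.2

def ecu_to_ghz_equivalent(ecus):
    """Return the (low, high) 2007-Xeon GHz range an ECU count represents."""
    return ecus * ECU_GHZ_LOW, ecus * ECU_GHZ_HIGH

# A hypothetical 4-ECU instance, for illustration:
low, high = ecu_to_ghz_equivalent(4)
print(f"4 ECUs ~ {low:.1f}-{high:.1f} GHz of 2007-era Xeon capacity")
```

Crude, but it gave an operations manager a physical anchor to reason from, which is exactly what the new "virtual CPU" label lacks.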

ECUs were not the simplest approach to describing a virtual CPU, but they at least had a definition attached to them. Operations managers and those responsible for calculating server pricing could use that measure for comparison shopping. But ECUs were dropped as a visible and useful definition, without announcement, two years ago in favor of a descriptor -- virtual CPU -- that means, mainly, whatever AWS wants it to mean within a given instance family.

A precise number of ECUs in an instance has become simply a "virtual CPU."

My 2012 piece on comparing prices continued with Microsoft's virtual CPU definition:

Microsoft uses a different standard CPU as the measure of its virtual CPUs -- designating the Intel Xeon 1.6 GHz CPU as its standard. This is a slightly newer chip with a stepped up clock speed, but a bit of math can approximate a direct comparison: the Microsoft virtual CPU amounts to about 62% more processing power than the Amazon one, according to Steven Martin, general manager of Microsoft's Windows Azure cloud.

So by doing a little math, you could actually compare what you were getting in virtual CPUs in EC2 versus Azure. Also by doing a little math, you knew how to compare one Amazon instance to another based on the ECU count in each virtual CPU. Microsoft didn't look too bad in the comparison.
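The "little math" involved was simple normalization. A sketch, assuming the 62% conversion factor quoted above (the figure is Steven Martin's 2012 estimate, not an official exchange rate, and the provider names here are illustrative):

```python
# Normalize vCPU counts to Amazon ECUs for rough cross-cloud comparison,
# using the ratio cited in 2012: one Azure vCPU ~ 1.62x one Amazon ECU.
AZURE_VCPU_IN_ECU = 1.62  # assumption drawn from the article's 62% figure

def to_ecus(provider, vcpus):
    """Express a provider's vCPU count in Amazon ECUs."""
    if provider == "azure":
        return vcpus * AZURE_VCPU_IN_ECU
    if provider == "aws":
        # Only valid while Amazon published an ECU count per vCPU.
        return float(vcpus)
    raise ValueError(f"no conversion factor for {provider!r}")

print(to_ecus("azure", 2))  # 2 Azure vCPUs ~ 3.24 ECUs
```

Once the ECU side of that equation disappeared, the whole conversion became impossible: there is no longer a defined unit to normalize to.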

That ability to make comparisons is one of the casualties of the nomenclature change.

I have searched for updated information on how a virtual CPU is measured and found nothing comparable to the definition of the 2012 ECU measure. I have questioned Amazon representatives three times between Oct. 27 and Dec. 21, and don't have much of an answer.

The answer basically is: Try to divine it from the instance matrix table on the AWS website.

Indeed, a post two years ago by Gartner's Kyle Hilgendorf noted the passing of the ECU measure, without trying to explain what new standard had replaced it.

That appears to mean a virtual CPU may be a variable within an instance family; it varies according to the value Amazon decides to assign it behind the scenes. The Jeff Barr blog announcement of the t2 series on July 1, 2014, for example, included a chart that explained the CPU of both the t2.micro and t2.small as "one virtual CPU." But the micro owned 10% of the baseline processing power of a modern Xeon core and the small owned 20%. The relationship is clear enough as listed here, but how one virtual CPU varies from the other is not visible to those visiting Amazon's instance matrix table.

Moving the Goalposts

On Dec. 15, AWS introduced the t2.nano, which has access to 5% of the baseline processing power of a Xeon core, even though it, too, is assigned one virtual CPU. So in place of a firm unit of measure, the EC2 Compute Unit, we have "virtual CPU" as a variable term. Memory doubles for each instance as you go up the scale, but "one virtual CPU" remains the same. Or you can take it on faith that CPU capacity is increasing in step with the other resources, which I think is what Amazon is asking customers to do.
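The discrepancy is easier to see laid out in numbers. A sketch of the t2 family as described above (baseline percentages are the figures cited in this article; the memory sizes reflect AWS's published t2 specs of the period and should be checked against the current instance table):

```python
# "One virtual CPU" across the t2 family, per the figures cited above.
# baseline = share of one modern Xeon core the instance gets continuously.
T2_FAMILY = {
    "t2.nano":  {"vcpus": 1, "baseline": 0.05, "mem_gib": 0.5},
    "t2.micro": {"vcpus": 1, "baseline": 0.10, "mem_gib": 1.0},
    "t2.small": {"vcpus": 1, "baseline": 0.20, "mem_gib": 2.0},
}

for name, spec in T2_FAMILY.items():
    # Memory doubles at every step, yet the vCPU column never changes:
    print(f"{name}: {spec['vcpus']} vCPU = {spec['baseline']:.0%} of a core, "
          f"{spec['mem_gib']} GiB RAM")
```

Three instance types, three different baseline CPU allocations, one identical label in the vCPU column of the instance matrix.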

If you try to pin down what constitutes a virtual CPU for each instance type in the Amazon instance matrix table, no "baseline processing power" percentages are listed there. They have to be searched out in product announcements or the occasional Jeff Barr evangelist blog.

The 2015 AWS FAQs page still includes a reference to ECUs, but the definition formerly attached to it has disappeared. It's the closest thing you'll find to an acknowledgement that ECUs are still in use behind the scenes, but Amazon no longer wishes to define them due to the changing nature of its underlying hardware. It says:

Amazon EC2 uses a variety of measures to provide each instance with a consistent and predictable amount of CPU capacity. In order to make it easy for developers to compare CPU capacity between different instance types, we have defined an Amazon EC2 Compute Unit. The amount of CPU that is allocated to a particular instance is expressed in terms of these EC2 Compute Units. We use several benchmarks and tests to manage the consistency and predictability of the performance from an EC2 Compute Unit. The EC2 Compute Unit (ECU) provides the relative measure of the integer processing power of an Amazon EC2 instance. Over time, we may add or substitute measures that go into the definition of an EC2 Compute Unit, if we find metrics that will give you a clearer picture of compute capacity.

[Want to learn more about why it's difficult to measure the comparative costs of the cloud? See Cloud's Thorniest Question: Does It Pay Off?]

I liked the old definition better, a 1GHz Xeon processor of 2006-2007 vintage. While this text refers to making it "easy for developers to compare," I don't see how they can do that. It's infinitely harder now than it used to be.

The FAQ adds:

Because Amazon EC2 is built on commodity hardware, over time there may be several different types of physical hardware underlying EC2 instances. Our goal is to provide a consistent amount of CPU capacity no matter what the actual underlying hardware.

This comes close to admitting the real problem of ECUs.

It was a definition built on the first round of EC2 infrastructure, and virtual CPUs became increasingly hard to express in ECUs as new rounds of hardware were introduced. To know that your CPU power increases in step with memory and other resources, you have to trust Amazon with its "variety of measures" and its ongoing search for suitable CPU metrics.

Easy to compare? It was never easy to compare, and over the last two years, it's become almost impossible.


Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive ... View Full Bio
Comments
SaneIT (User Rank: Ninja)
1/4/2016 | 8:35:26 AM
Re: I think you have it figured out
I think that eventually they will start to blend better than they do now.  Remember when AMD had really simple processor names like K4, K5, K6 and Intel names looked like Ultra Low Voltage Mobile Pentium III Processor-M?  It was a maze to figure out what similar offerings were between the two.  This is just one very simple example but it happened for some of the same reasons.
batye (User Rank: Ninja)
1/1/2016 | 8:58:28 PM
Re: I think you have it figured out
@tzubair, I think living in the new IT/technology age is changing our dictionary and the way we communicate... 
tzubair (User Rank: Ninja)
12/30/2015 | 10:20:45 PM
Re: I think you have it figured out
"When their offerings change we can expect some changes in the verbiage used to describe them. "

@SaneIT: And hopefully the new verbiage would be something that's standard across the companies and provides a means to easy comparison. I wonder if there's any body working towards standardizing cloud offerings. I have always felt that there hasn't been much work done on establishing a regulatory or a controlling body when it comes to managing cloud vendors.
SaneIT (User Rank: Ninja)
12/28/2015 | 8:37:08 AM
Re: I think you have it figured out
@tzubair, that could be too, but I think at the same time the IAAS vendors are starting to see what they can actually offer.  Their environments are changing rapidly and they can offer us things today that they couldn't a couple of years ago.  When their offerings change we can expect some changes in the verbiage used to describe them. 
tzubair (User Rank: Ninja)
12/27/2015 | 11:54:54 AM
Re: I think you have it figured out
"It may not be easy to compare across platforms now but the market is still maturing so some variation in nomenclature is expected."

@SaneIT: I think the change is coming about because more and more IT professionals on the enterprise side (particularly on the senior management side) are becoming more aware of the technicalities with cloud-based infrastructure. I still feel a lot more companies need to train their resources to make them understand how the cloud platform works and what to expect from their vendors.
tzubair (User Rank: Ninja)
12/27/2015 | 11:45:22 AM
Re: Amazon prefers to obscure the issue
@Charlie: I think most enterprise customers would prefer common units and measures for comparison to be able to pick the best solution suiting their needs. When it comes to a purchase decision where a lot is at stake in terms of the budget and requirements, not having enough visibility and clarity will make it difficult for the customers and will negatively affect the whole industry. All the players should consider this.
dcrafti (User Rank: Apprentice)
12/25/2015 | 6:54:19 AM
Misunderstanding t2 instances
I think the article misunderstands t2 instances. They are burst mode instances, which have a low baseline (the 5%, 10% and 20% figures quoted), but can burst up to a higher number for some proportion of the time, with the bigger instances being able to burst for longer. They probably all have the exact same vCPU speed when they're bursting, which is why they are each described as 1 vCPU.
Charlie Babcock (User Rank: Author)
12/23/2015 | 4:14:18 PM
Amazon prefers to obscure the issue
I think you could stick to a defined measure if you really tried to. Microsoft is challenging hard enough that Amazon prefers to obscure the issue and muddy any comparisons, in my opinion. 
SaneIT (User Rank: Ninja)
12/23/2015 | 10:13:41 AM
I think you have it figured out
"AWS introduced the t2.nano, which has access to 5% of the baseline processing power of a Xeon core"

"It was a definition built on the first round of EC2 infrastructure, and virtual CPUs became increasingly hard to express in ECUs as new rounds of hardware were introduced."

As Amazon stretches out its offerings, it gets harder to define what a CPU is. Do you need all the cycles of a current-generation i5 or i7, or do you have a tiny requirement that only needs a fraction of those cycles? Some applications of AWS won't need a full physical CPU-style assignment, so why lock that down? Why charge for it? It may not be easy to compare across platforms now, but the market is still maturing, so some variation in nomenclature is expected.

 