Amazon Trumps Microsoft With C4 Virtual Servers - InformationWeek



Amazon outstrips Microsoft's new Azure offering with the launch of the c4.8xlarge, a 36-virtual-CPU server instance.


Four days after Microsoft introduced its G series, which it claimed are the "largest ... of any virtual machine size currently available in the public cloud," Amazon Web Services has brought forth its new C4 Series instances, which are larger.

The C4s are designed for the largest data-processing and e-commerce workloads sent to the cloud. The AWS C4 instances were announced Nov. 13 at Amazon's Re:Invent event in Las Vegas.

Both Amazon C4s and Microsoft's G series virtual machines are based on version 3 of Intel's E5 Xeon chips, known as the Haswell family. Microsoft uses E5-2600 chips in its Azure hosts.

[Want to learn more about Microsoft's efforts to compete with Amazon? See Microsoft Matches Amazon With Cloud Crypto Key Storage.]

Amazon uses a chip customized by Intel for its EC2 servers: an E5-2666 running at 2.9 GHz, capable of reaching 3.5 GHz through Intel's Turbo Boost feature. For that increase to occur, other cores on the host need to be running at less than their full capacity, and any cores running at 3.5 GHz must stay within the power-consumption and temperature limits set for the host as a whole.

The new Amazon C4 can use up to 36 virtual CPUs. The E5-2666s go into Amazon servers packed with up to 18 cores each, and the Haswell generation is double-threaded, exposing two hardware threads per core. Each C4 virtual CPU therefore represents one hyperthread, or half a physical core: 18 cores times two threads yields the 36-vCPU maximum.
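The vCPU arithmetic above can be sketched in a few lines (the `vcpus` helper is purely illustrative, not an AWS API):

```python
# Each Haswell core exposes two hardware threads, and AWS maps one
# vCPU to one hardware thread, so an 18-core E5-2666 host yields 36 vCPUs.
def vcpus(physical_cores: int, threads_per_core: int = 2) -> int:
    return physical_cores * threads_per_core

print(vcpus(18))  # 18-core c4.8xlarge host: 36 vCPUs
```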

The largest Microsoft G Series can use up to 32 virtual CPUs, said Azure program management director Corey Sanders in a blog posted January 8. Microsoft didn't define what now constitutes a virtual CPU in the Azure camp, although it used to be the equivalent of a 1.6-GHz Xeon core. It also didn't state the chip speed of the Xeons that it is running, but a standard 2600 v3 can run at a variety of speeds from 1.6 GHz to 3.5 GHz.

Cloud vendors seldom compete on the basis of chip speeds, since the fastest chips are also the most expensive, and they're looking for a mass-produced price/performance sweet spot in the market. They order servers by the tens of thousands.

Haswell microarchitecture die. (Source: Amazon)

Amazon, however, has taken a first step toward competing on chip performance by specifying the capabilities of the custom Haswell E5-2666 chip built for it.

"We are able to deliver more cores in the form of 36 vCPUs on the c4.8xlarge instance type," AWS's chief evangelist Jeff Barr wrote in a blog posted Monday, referring to hosts with 18 cores.

The c4.large comes with 3.75 GB of RAM; the c4.8xlarge comes with 60 GB. The c4.large is priced at $0.116 per hour; the c4.8xlarge at $1.856 per hour.
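Worked out at the article's hourly rates, the on-demand cost arithmetic looks like this (a 30-day, 720-hour month is an assumption for illustration, not an AWS billing rule):

```python
# Monthly cost at the on-demand rates quoted in the article.
HOURS_PER_MONTH = 24 * 30  # assumes a 30-day month

rates = {"c4.large": 0.116, "c4.8xlarge": 1.856}  # USD per hour
for name, rate in rates.items():
    print(f"{name}: ${rate * HOURS_PER_MONTH:,.2f}/month")
```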

Microsoft's largest G instance, the G5, comes with 32 virtual CPUs and a massive 448 GB of memory. It costs $9.65 per hour. An updated price list shows the G1 with two cores and 28 GB of RAM priced at $0.67 per hour. The number of cores, amount of memory, and price doubles with each step up the G series, with one exception: the G5 doubles the G4's cores and memory but costs less than double its price. The G2 costs $1.34 per hour; the G3, $2.68; and the G4, $5.36.
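The doubling pattern can be checked against the published figures. Note that the memory amounts for the G2 through G4 below are inferred from the doubling rule, since the article states only the G1 and G5 figures:

```python
# G-series figures from the article; G2-G4 memory is inferred by doubling.
g_series = {  # name: (vCPUs, memory_GB, USD_per_hour)
    "G1": (2, 28, 0.67),
    "G2": (4, 56, 1.34),
    "G3": (8, 112, 2.68),
    "G4": (16, 224, 5.36),
    "G5": (32, 448, 9.65),
}
names = list(g_series)
for prev, cur in zip(names, names[1:]):
    pc, pm, pp = g_series[prev]
    cc, cm, cp = g_series[cur]
    # Cores and memory double at every step; price doubles until the G5.
    print(f"{prev}->{cur}: cores x{cc / pc:g}, memory x{cm / pm:g}, price x{cp / pp:.2f}")
```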


Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive ...

Comments

User Rank: Ninja | 1/13/2015 8:53:35 PM | Re: Big memories lead to big bills

@Charlie B, I would hope that's not non-stop. If you're running that type of workload, it's probably not something that would be in the cloud, because of ISP fees alone. So hopefully it's on an as-needed basis. Maybe you'll use .08% of the processing power for a few days, then drop off to average usage.
Charlie Babcock, Author | 1/13/2015 6:56:35 PM | Big memories lead to big bills

If you really need that much memory, you're willing to pay for it. That $9.65 an hour for an Azure G5 with 448 GB of memory adds up if you're running it non-stop: $7,180 a month, or $86,160 a year.
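The comment's figures work out under a 31-day-month assumption, with the monthly total rounded before annualizing:

```python
# Reproducing the commenter's arithmetic for a non-stop Azure G5.
rate = 9.65                        # USD per hour, per the article
monthly = round(rate * 24 * 31)    # 31-day month, rounded to whole dollars
yearly = monthly * 12
print(monthly, yearly)
```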