CenturyLink Pursues Big Data, High Traffic Workloads
Adds new set of Hyperscale servers equipped with SSDs to address most ambitious public cloud users.
CenturyLink Cloud wants to compete for the big data users, high-traffic websites, and distributed analytics applications that many enterprises run off and on throughout the month. So on Wednesday the company is launching a new set of Hyperscale virtual servers to address these most ambitious public cloud uses.
Its Hyperscale virtual servers come equipped with solid-state drives -- not a mix of flash and hard drives, but 100% flash -- to maximize the servers' I/O throughput no matter how large the application. The flash sits alongside the RAM in dual in-line memory module slots on the motherboard of the physical host itself, a trick IBM performed in its new class of X6 servers.
The Hyperscale SSDs are closely associated with a host in the same rack rather than on a storage area network device somewhere outside the rack or on the other side of the datacenter. That allows a Hyperscale instance to perform 15,000 input/output operations a second (IOPS) for compute-intensive workloads, according to Richard Seroter, head of product management for CenturyLink Cloud. In an interview, he explained, "These servers are using local storage near each individual host." Standard cloud servers, including CenturyLink's, rely on SAN-based disk drives.
CenturyLink's announcement is another example of how the intensive compute part of the cloud services market has heated up in recent months. In November, Amazon introduced a set of compute-intensive instances, the C3 series, which feature up to 32 virtual CPUs, and in late December it introduced four I/O-intensive instances with its i2 series. Rackspace also has its own SSD-equipped compute-intensive servers, as does Google.
Unlike Amazon, CenturyLink will allow customers to construct a Hyperscale server with whatever amounts of memory, CPU, and storage they want, and the company will charge the same amount per GB of RAM or CPU power as it does for standard servers. That's a departure from Amazon's approach of presenting specific combinations of the three resources for each instance type, a practice that drew a swipe from Seroter. "There's no premium charge. Hyperscale is priced exactly the same as the public cloud," he said. "I don't want to nickel-and-dime you by pricing separately for high volume I/O and storage."
But comparing pricing in the cloud can be a tricky, mind-bending exercise. CenturyLink charges $0.07 an hour for a 2 GHz or 2+ GHz CPU in the class of Intel's Ivy Bridge Xeons. The recent Ivy Bridge E5-2670 used by both Amazon and CenturyLink is listed as running at 2.5 to 3.3 GHz. Up to 16 such CPUs can be assembled into a CenturyLink Hyperscale server.
CenturyLink's price of $0.07 per CPU per hour might seem like a bargain compared to the $0.11 that Amazon charges for one of its basic instances: an M3 medium server whose single virtual CPU is equivalent to 3 GHz of processing power. More precisely, the M3's CPU is equal to three units of Amazon's special measure, the EC2 compute unit; an ECU is equivalent to a 2007 Xeon processor core running at 1 GHz. But Amazon's price also includes 3.75 GB of RAM. CenturyLink charges an additional $0.04 per GB per hour for RAM, so an equivalent 3.75 GB at CenturyLink adds another $0.15 per hour, for a total of $0.22 per hour.
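The comparison above can be reduced to simple arithmetic. The sketch below uses only the 2014 rates quoted in this article ($0.07 per CPU-hour and $0.04 per GB-hour at CenturyLink, $0.11 per hour for Amazon's bundled M3 medium); actual cloud pricing has changed many times since.

```python
# Hourly-cost comparison using the per-resource rates quoted in the article.
CENTURYLINK_CPU_PER_HR = 0.07     # per ~2 GHz virtual CPU
CENTURYLINK_RAM_PER_GB_HR = 0.04  # per GB of RAM

def centurylink_hourly(cpus: int, ram_gb: float) -> float:
    """Hourly cost of a CenturyLink Hyperscale server (CPU + RAM only)."""
    return cpus * CENTURYLINK_CPU_PER_HR + ram_gb * CENTURYLINK_RAM_PER_GB_HR

# Amazon's M3 medium bundles 1 virtual CPU (~3 ECUs) and 3.75 GB of RAM
# into one flat rate.
AMAZON_M3_MEDIUM_PER_HR = 0.11

# Matching the M3 medium's resources at CenturyLink:
cost = centurylink_hourly(cpus=1, ram_gb=3.75)
print(f"CenturyLink equivalent: ${cost:.2f}/hr "
      f"vs Amazon M3 medium: ${AMAZON_M3_MEDIUM_PER_HR:.2f}/hr")
# 1 * 0.07 + 3.75 * 0.04 = 0.07 + 0.15 = $0.22/hr
```

The point of unbundled pricing, of course, is that a customer who needs 1 CPU but only 1 GB of RAM pays $0.11 at CenturyLink rather than carrying Amazon's fixed 3.75 GB.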
Nevertheless, Seroter pointed out that by configuring only the amount of each resource that they want, CenturyLink customers can escape the overprovisioning that can occur when a vendor's resource estimate doesn't match the customer's needs. A Hyperscale server may be configured with up to 16 CPUs, 128 GB of RAM, and a TB of SSD storage.
Seroter said CenturyLink users add resources that allow their virtual servers to scale out, as the elastic cloud was intended to work. This approach prevents customers from being stuck inside someone else's definition of a good combination of resources.
Amazon, however, is hardly being upstaged. Its November announcement of "compute-optimized" C3 servers introduced the C3 extra-large, with four virtual CPUs equal to 14 EC2 compute units for $0.30/hour; the double extra-large, with eight virtual CPUs equal to 28 ECUs for $0.60 an hour; the quadruple extra-large, with 16 virtual CPUs and 55 ECUs for $1.20 an hour, etc.
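Dividing each tier's price by its compute units shows how closely Amazon's C3 pricing tracks raw capacity. The sketch below uses the 2014 figures quoted in this article; the API-style tier names (c3.xlarge and so on) are the conventional AWS labels for these sizes and are an assumption, since the article uses the spelled-out names.

```python
# Per-ECU cost for the C3 tiers as quoted in the article (2014 prices).
# Tier names are assumed AWS API labels for "extra-large", "double
# extra-large", and "quadruple extra-large".
c3_tiers = {
    "c3.xlarge":  (14, 0.30),  # (ECUs, $/hr)
    "c3.2xlarge": (28, 0.60),
    "c3.4xlarge": (55, 1.20),
}

for name, (ecus, price) in c3_tiers.items():
    print(f"{name}: ${price / ecus:.4f} per ECU-hour")
# All three tiers land near $0.021-0.022 per ECU-hour, so within the
# C3 series the price scales almost linearly with compute capacity.
```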
Amazon is equipping its general-purpose M instances as well as the C3 series with SSDs, so that form of storage is less of an advantage to competitors than it used to be. It supplemented the new compute-intensive instances with I/O-intensive types in its I2 series, equipped with large amounts of SSD storage. An eight-virtual-CPU I2 instance with 61 GB of RAM and 1,600 GB of SSD storage is priced at $1.705 an hour.
It can be difficult to sort through the different cloud services and server types to find the best bargain. But with so many high-end services constantly improving their I/O rates to compete for market share, it's clear that big data, effective website operation, and frequent analytics are more crucial than ever among young companies getting established on the Web as well as their more mature enterprise counterparts.
Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive ...