Dimension Data Challenges Amazon With Performance Data
Dimension Data-sponsored research puts the telco unit on top of most cloud vendor performance benchmarks. What are the advantages of using a telco-owned supplier of cloud computing?
The Tolly Group has reported that Dimension Data is faster than Amazon Web Services, Rackspace and IBM SmartCloud in a series of benchmarks. Dimension Data sponsored the tests, so the results have to be taken with a grain of salt.
Dimension Data, the sizeable cloud unit of NTT, operates in 49 countries. It runs data centers in Virginia and California and at last report was opening one in Europe.
Dimension Data also acquired the cloud skills and software of OpSource in June 2011. The benchmark results shouldn't be taken at face value, but they may suggest that there are some advantages in using a telco-owned supplier of cloud computing. There are other such suppliers available as well, such as Terremark, now owned by Verizon, and Savvis, part of CenturyLink.
In a virtual CPU face-off with Amazon, the Tolly Group used the C-Ray 1.1 benchmark, part of the Phoronix Test Suite 3.6.1. Dimension Data processed a Linux workload in about two-thirds the time of Amazon Web Services, 606 seconds versus 909 seconds, according to the Tolly report. The test was run on the smallest virtual CPU from each vendor.
To understand these results, it would be necessary to compare what constitutes a virtual CPU for each party; each cloud supplier tends to define a virtual CPU a little differently. For Amazon, a virtual CPU is the equivalent of an EC2 compute unit (ECU); a compute unit is the equivalent of a Xeon processor running at 1 GHz to 1.2 GHz of 2006 or 2007 vintage, as Amazon states publicly on its AWS website. The physical equivalent of a Dimension Data virtual CPU was not immediately evident from a search of its website.
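As a quick sanity check on the "two-thirds" characterization, the reported C-Ray times work out as follows (a simple calculation on the figures in the Tolly report, not additional test data):

```python
# Reported C-Ray 1.1 single-vCPU times, in seconds, from the Tolly report.
times = {"Dimension Data": 606, "Amazon Web Services": 909}

ratio = times["Dimension Data"] / times["Amazon Web Services"]
print(f"Dimension Data needed {ratio:.0%} of Amazon's time")  # prints "67%"
```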
No benchmark using a single virtual CPU was executed against Rackspace or IBM because they don't offer a single virtual CPU in their server options, the Tolly report said. But a benchmark using two virtual CPUs yielded results that might have been a little different from what Dimension Data expected. IBM ran the benchmark in 190 seconds to Dimension Data's 284 seconds; Rackspace was nipping at Dimension Data's heels at 289 seconds. Dimension Data may have wanted to make the point that compared to Amazon at 433 seconds, it still looked good. It's also possible that one type of benchmark will favor one cloud vendor, while a second favors another.
When the Tolly Group tested four virtual CPUs, the same pattern held. IBM was the fastest at 101 seconds, Dimension Data at 141, Rackspace at 149 and Amazon at 227.
In a test of memory speed using the RAMSpeed 3.5 benchmark, also part of the Phoronix suite, the Tolly Group reported that Dimension Data recorded the highest number of memory operations per second in two tests. With about 4 GB of RAM, a Dimension Data server executed 10,831 operations; Amazon, 2,523; IBM, 9,985; Rackspace, 6,522. When 8 GB was used in the same test, Dimension Data executed 18,542 operations; Amazon, 3,200; IBM, 8,772; Rackspace, 7,818.
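The memory-speed gap between the leader and Amazon is stark. Working directly from the reported figures, the margin comes out as follows:

```python
# Reported RAMSpeed 3.5 operations per second, from the Tolly report.
ops_4gb = {"Dimension Data": 10831, "Amazon": 2523, "IBM": 9985, "Rackspace": 6522}
ops_8gb = {"Dimension Data": 18542, "Amazon": 3200, "IBM": 8772, "Rackspace": 7818}

for label, ops in (("4 GB", ops_4gb), ("8 GB", ops_8gb)):
    lead = ops["Dimension Data"] / ops["Amazon"]
    print(f"{label}: Dimension Data outpaced Amazon {lead:.1f}x")
# 4 GB: roughly 4.3x; 8 GB: roughly 5.8x
```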
In some notes on the benchmarking, Tolly reported the disparate nature of small, medium and large server instances provided by each vendor. It said "the closest configurations were used," meaning some discrepancies in memory size existed in the test.
The Postmark 5.1 benchmark, part of the Phoronix test suite, measures movement of small files off local disk in a transaction type of application. Dimension Data pulled away from the crowd here. Amazon also performed strongly, outpacing IBM and Rackspace, though it was nowhere near as fast as Dimension Data.
On a server with two virtual CPUs and 4 GB of RAM, Dimension Data processed 3,472 transactions per second; Amazon, 1,278; IBM, 684; and Rackspace, 642.
On a server with four virtual CPUs and 8 GB of RAM, Dimension Data again processed 3,472 transactions per second; Amazon, 1,342; IBM, 527, or fewer than reported from the smaller server instance; Rackspace, 659. The report offered no explanation for why Dimension Data produced exactly the same total on two different servers, or why a larger server produced lower results for IBM. The figures seem to defy common sense and will bear further checking. The movement of files was managed by the default file system of each vendor's configuration.
A fourth test, the open source Iperf 2.0.4 network performance benchmark, measured each vendor's intra-data-center LAN. It was another opportunity for Dimension Data to shine. The Tolly Group reported that Dimension Data, running a server with two virtual CPUs and 4 GB of RAM, achieved an average bidirectional throughput of 3,260 Mbps; Amazon, 1,052 Mbps; IBM, 1,834 Mbps; Rackspace, 377 Mbps.
Using a larger server with four virtual CPUs and 8 GB of RAM, Dimension Data achieved 4,463 Mbps; Amazon, 1,244 Mbps; IBM, 1,864 Mbps; Rackspace, 479 Mbps.
"The results show only Dimension Data delivers true Gigabit Ethernet-class throughput," the Tolly Group boasted on behalf of its sponsor.
One shortcoming of the benchmark is that it occurred before IBM acquired SoftLayer, an experienced IaaS provider, and it's not clear whether benchmarks conducted on SoftLayer IaaS would have yielded the same results.
Taken with a grain of salt, these tests indicate that a telco cloud unit may offer strong performance in local file movement and in data transfer between servers within the cloud data center.