Watch out, Amazon Web Services. These younger cloud companies bring new architectures and provisioning methods to the game.

Charles Babcock, Editor at Large, Cloud

January 2, 2014

The cloud is old enough, and Amazon Web Services mature enough at age seven, to make you wonder: Could market leader AWS be outflanked by a fresh architecture or a supplier with a more flexible method of provisioning user-designed servers?

So far, cloud vendors have defined services using their own templates, with a given amount of CPU, memory, and storage. Market leaders Amazon, Google, Microsoft, and Rackspace each do it roughly the same way.

But younger companies are trying new architectures and more flexible provisioning methods. In a market where names like Savvis, Terremark, CSC, and SoftLayer predominate, here's a six-pack of contenders to be the next big name in cloud computing.

DigitalOcean in New York has an architecture that could pose a challenge one day to Amazon, given its heavy reliance on solid-state disks and high-speed provisioning. But the firm has also quickly learned there is more to cloud services than a speedy architecture and new components.

DigitalOcean incorporates 1.92 TB of solid-state storage per host server, which yields high-speed provisioning for small servers costing 1.5 cents an hour, or just $5 a month. That has appealed to programmers, particularly members of the Ruby community. DigitalOcean grew fast between January and June 2013, adding 6,996 servers; AWS was one of the few service providers growing faster. In the last two months of the year, it added another 6,514 servers, compared to Amazon's 6,269, according to Netcraft, the surveyor of web-facing servers.

[Want to learn more about how DigitalOcean got started? DigitalOcean: Developer-Friendly Cloud Service On A Budget.]

DigitalOcean, Netcraft said, was the only cloud service provider that was adding more servers than Amazon: "DigitalOcean is now growing faster than Amazon Web Services… Together the two companies accounted for more than a third of the Internet-wide growth in web-facing computers."

One user, Kenneth White, found 18 GB of a previous user's data when he was assigned unwiped SSDs as his storage. In the normal course of events the data would have been overwritten by the new user, but White decided to look at what a new user would see if he snooped before writing his own data to disk. He informed DigitalOcean that a previous user's data was "leaking" into his own server, and DigitalOcean responded that it was dismayed at the oversight and would write code to prevent it from happening in the future, according to Wired.

That exchange took place between March 26 and April 2. So why was VentureBeat able to report Dec. 30 that DigitalOcean was once again having a problem with data leaks? On Dec. 29, a hacker in Berlin, Jeffrey Paul, posted on GitHub that he had discovered a previous customer's data on his just-commissioned DigitalOcean virtual server, setting off a storm of negative comments.

Wiping SSDs appears to have impeded the on-boarding of new users, according to a blog post by co-founder Moisey Uretsky explaining what went wrong. DigitalOcean was growing so fast that it chose to stop executing a wipe by default, performing one only upon explicit user request. Many users didn't realize the default had changed -- DigitalOcean made no public announcement -- and so they neglected to order the wipes. That let DigitalOcean reassign their storage more speedily, with predictable exposure for the previous customer, whose data was still resident on the SSD.

In the interest of "transparency," Uretsky explained in a Dec. 30 blog post: "This is an issue that we cleared up earlier in the year with scrubbing the drive… However as utilization of our cloud went up, we saw that [default] scrubbing was starting to cause degradation in performance..." The firm decided to make scrubbing an option that had to be designated by the user. After the Dec. 29 blowup, DigitalOcean engineers again "are updating the code base right now to ensure that [scrubbing] will be the default behavior," he wrote.
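The fix DigitalOcean describes comes down to a default: scrubbing released storage must be the safe default, with speed as the explicit opt-out, not the other way around. A minimal sketch of that logic, with hypothetical names (this is not DigitalOcean's actual code):

```python
def release_volume(blocks, scrub=True):
    """Return a volume's blocks to the free pool.

    scrub=True (the safe default) zeroes every block before reuse;
    a caller must explicitly opt out to trade safety for speed.
    """
    if scrub:
        blocks = [b"\x00" * len(b) for b in blocks]
    return blocks

# A new tenant assigned the freed blocks sees only zeroes:
old_data = [b"customer-secret", b"api-key-123"]
freed = release_volume(old_data)
assert all(set(b) == {0} for b in freed)
```

Had the opt-out been the default all along, Paul's just-commissioned server would have contained nothing but zeroed blocks.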

It seems that with great growth in the cloud comes great responsibility, as Amazon could probably testify. Data security is a paramount concern, and DigitalOcean in this initial go-round has shown itself not ready for all the responsibilities that make a large service trustworthy for the long haul. Nevertheless, a company that ranked 568th among cloud suppliers in January 2013 was number 72 by June and number 15 by December, according to Netcraft. The young New York firm has speedy servers, bargain-basement prices, and a growing list of tools for developers. If it gets itself straight on disk scrubbing, it will bear watching between now and December 2014.

CentriLogic is one of those little companies you've never heard of that remains a long shot. But it looks to be in the right position at the right time.

It's headquartered in Toronto, so it offers entrée to the Canadian market, which has its own protections in place to prevent US cloud suppliers from storing a Canadian citizen's health data, due to the threat of US government snooping.

Also, like Bluelock in Indianapolis, which has a growing disaster recovery business, it's far enough away from the swath of Hurricane Sandy's destruction to make it an attractive location as a backup and disaster recovery site for companies on the East Coast. In the hurricane's aftermath, some companies in the New York metropolitan area found Amazon's US East location in Ashburn, Va., too exposed, even though it continued operating on emergency power through the storm. 

CentriLogic has also built out multiple locations. It has a datacenter in Toronto and two in Mississauga, Ontario; others in Rochester and Buffalo, N.Y.; Bracknell, UK; and Hong Kong. Last June it acquired Dacentec, with 23,000 square feet of datacenter space near Charlotte, N.C., only one-third of it in use, which has been added to the mix.

It offers a mix of services, in the manner of SoftLayer (acquired in 2013 by IBM), including dedicated physical servers as well as multi-tenant virtual servers and hosting services. That, along with its ability to offer cross-border availability zones, just might make it a preferred partner for those firms looking for a recovery site and a closer tie to Canadian customers.

OVH.com is not a name that trips lightly from cloud-conversant lips, but it's one of Europe's largest cloud suppliers. The privately held company is also a proficient builder of infrastructure, having taken a soup-to-nuts responsibility for the design and production of its cloud servers and infrastructure since it started as a hosting service provider in 1999. That's reminiscent of the approach Google and Amazon took as well.

OVH stands for either the nickname of private owner Octave Klaba -- Oles Van Herman -- or On Vous Héberge ("We host you," in French). In Europe, it definitely has geographical reach. While headquartered in Roubaix, France, it has opened facilities in the US and Canada and is one of the largest datacenter chains in Europe. It offers facilities in Germany, Italy, Poland, Spain, Ireland, the UK, the Netherlands, Lithuania, and Finland.

It includes 100 GB of automated backup with its dedicated cloud servers and makes extensive use of SSDs on its dedicated servers. It has deployed IPv6, increased IP address availability for its customers, and implemented Domain Name System Security Extensions, or DNSSEC, closing another known vulnerability. It has built out its own fiber optic network connecting its centers.

OVH.com is a frequent host to VMware-based workloads, like its European competitor, Colt. It is a proficient supplier that gained unwanted notoriety when Amazon expunged WikiLeaks servers from its facilities in December 2010 and WikiLeaks relocated to OVH.com servers in Europe. OVH.com didn't seek WikiLeaks business but announced it would honor its contract rather than give in to pressure, Klaba said.

OVH.com in the UK was also home to two Bitcoin providers whose servers were compromised last April. The random number generation in their security software wasn't as strong as it might have been, and the passwords were broken by a brute-force attack.
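Weak random number generation fails in a characteristic way: a token derived from a predictable seed (a timestamp, say) leaves a search space small enough to brute-force by replaying candidate seeds, whereas the operating system's cryptographic RNG does not. A generic sketch of the difference, not the compromised providers' actual code:

```python
import random
import secrets

def weak_token(seed):
    # Deterministic and guessable: same seed, same "random" token
    rng = random.Random(seed)
    return "".join(rng.choice("0123456789abcdef") for _ in range(16))

def strong_token():
    # 64 bits drawn from the OS cryptographic RNG
    return secrets.token_hex(8)

# An attacker who suspects a timestamp seed simply replays candidates:
victim = weak_token(1_387_000_000)
cracked = next(s for s in range(1_386_999_990, 1_387_000_010)
               if weak_token(s) == victim)
assert cracked == 1_387_000_000
```

The brute-force loop above searches only 20 seeds; a password generated this way is only as strong as the seed is unpredictable.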

Nevertheless, OVH is a leading-edge supplier with an ability to extend its services beyond Europe. With concerns mounting over NSA snooping among major web service suppliers, such as Microsoft and Google in the US, it may be positioned for a long-term expansion.

Virtustream, from its earliest days, has attracted financial backing as a potential savvy service supplier. In 2010, Intel Capital was an investor to the tune of approximately $5 million; in September 2013, SAP invested $40 million to aid Virtustream's efforts to host SAP Hana and applications. Why does Virtustream attract the money -- $120 million in all, or about the same amount as invested in Joyent? Rodney Rogers and Keith Reid, co-founders, co-CEOs, and chairman and chief technology officer, respectively, both come out of Adjoined Consulting, which helped companies manage outsourcing and technology issues. Both have experience in application integration, one of the next hurdles that will determine which cloud vendors continue to attract workloads and which fall by the wayside.

Virtustream built up its xStream cloud management platform for its public cloud operation, then made it available as a package for private enterprise cloud operation. In December 2011, it bought Enomaly, with its Elastic Computing management platform that can farm out small units of work to different clouds and manage them. It includes the SpotCloud marketplace, where available capacity can be listed by a primary provider, partners, or even, someday, enterprise private clouds.  

Virtustream has built in the capability to run VMware ESX Server workloads and KVM workloads under OpenStack, giving it a stake in two of the principal emerging public cloud architectures. It offers the same SLAs on each.

It's as if Virtustream decided when it was founded in 2009, as the market was first taking shape, to keep maximum flexibility in its approach. Now, that flexibility may serve it well as VMware customers convert datacenters to private cloud, and OpenStack gains credibility among suppliers. Virtustream operates datacenters in San Francisco, Washington, D.C., and London.

ProfitBricks launched in mid-2012, and since then the Berlin-based company hasn't been heard from much. Nevertheless, it brought a new and highly desirable concept to cloud computing upon launch: End-users should be able to configure the server they want, not the one that the supplier has pre-configured for them, and pay for it by the minute, not hour.
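The by-the-minute billing idea is easy to quantify: per-hour billing rounds a job up to whole hours, while per-minute billing charges pro-rata for what was actually used. A quick illustration (the $0.12/hour rate is hypothetical, not either provider's actual price):

```python
import math

def hourly_cost(minutes, rate_per_hour):
    # Traditional cloud billing: every started hour is a full hour
    return math.ceil(minutes / 60) * rate_per_hour

def per_minute_cost(minutes, rate_per_hour):
    # Per-minute billing: pay pro-rata for the minutes actually used
    return minutes * (rate_per_hour / 60)

# A 61-minute job costs two full hours on hourly billing...
assert hourly_cost(61, 0.12) == 0.24
# ...but barely more than one hour's rate when billed by the minute.
assert round(per_minute_cost(61, 0.12), 3) == 0.122
```

For short-lived or bursty workloads, the rounding alone can nearly double the bill under hourly pricing.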

In August, it cut prices and now charges roughly half Amazon's rate -- US$61.65 a month, versus $111.40 -- for a virtual server with CPU, memory, and storage identical to an AWS m1.medium. (It won't necessarily come with the rich database, content-distribution, and other Amazon services found alongside EC2.)

ProfitBricks lets users build single servers ranging from small to very large, rather than offering server clusters as its high-performance compute option. That means a customer can scale up a large database system or SAP application as far as it is likely to go, as opposed to scaling out across multiple servers. The largest single ProfitBricks server now offered is 62 CPUs, each equivalent to a current AMD or Intel core, with 240 GB of memory. ProfitBricks will provide the CPU horsepower from a single host; if a host doesn't have as many CPUs as a customer wants, the workload is moved to a new host through management software.

"Most applications and services are meant to scale vertically," not scale out horizontally, said Bob Rizika, CEO of ProfitBricks' US division, in an interview earlier this year. The firm has concentrated on a user interface that makes it simple for customers to select the server features they want.

The firm received an additional $19.5 million in venture funding in March from the German firm United Internet AG, on top of $18.8 million it previously received. Its approach to cloud services was one of 10 finalists at the MIT Sloan CIO Symposium's Innovation Showcase this year.

ProfitBricks has space in the Switch Communications datacenter in Las Vegas and in Telemax datacenters in Karlsruhe, Germany.

You don't hear much about Dimension Data, partly because it's an older company, founded in 1983 and headquartered in Johannesburg, South Africa, and partly because its cloud services are focused on emerging markets in the home territory of its parent company, the Japanese telecom giant NTT Group. That means its largest presence is in Asia/Pacific nations, the Middle East, and Africa.

In addition, Dimension Data comes out of the world of systems integration and managed hosting, as opposed to being a cloud pure play, and it maintains a mix of businesses to this day. But it's no slouch in providing cloud services, with communications and integration services tied in. As it began to shift its attention to cloud services, it had 11,000 employees and revenues of $4.7 billion in 2010, operating in 49 countries.

It gained cloud self-provisioning software and cloud management skills through its acquisition of OpSource in 2011. In July 2013 the Tolly Group, commissioned by Dimension Data, published benchmarks showing Dimension Data outperforming Amazon and other leading cloud suppliers.

It emphasizes SLAs that offer 99.99% uptime, like Amazon, plus 99.95% uptime for the network, a Dimension Data addition. It also guarantees latency of less than one millisecond in the transfer of data packets between cloud servers on the same internal cloud network. Its SLAs return more value to the business than the straight replacement time Amazon offers. For example, an outage of four minutes beyond the allowable limit would trigger a 2% reduction in the customer's monthly bill, even though four minutes doesn't represent 2% of the month.
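The arithmetic behind that SLA is worth spelling out. A 99.99% monthly uptime guarantee allows roughly 4.3 minutes of downtime in a 30-day month, and under a credit-style SLA, exceeding the allowance by four minutes costs the provider 2% of the bill -- far more than those minutes' pro-rata share of the month. The figures follow the article; the function itself is illustrative, not Dimension Data's actual formula:

```python
MONTH_MINUTES = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def allowed_downtime(uptime_sla):
    # Minutes of downtime permitted before the SLA is breached
    return MONTH_MINUTES * (1 - uptime_sla)

def credit_pct(excess_minutes, pct_per_increment=2.0, increment=4):
    # e.g. a 2% bill credit per 4 minutes beyond the allowance
    return (excess_minutes // increment) * pct_per_increment

assert round(allowed_downtime(0.9999), 1) == 4.3  # ~4.3 min allowed
assert credit_pct(4) == 2.0                       # 2% off the bill
# Four minutes is under 0.01% of the month; the credit far exceeds it:
assert 4 / MONTH_MINUTES < 0.0001
```

That asymmetry is the point: the penalty stings the provider enough to make the guarantee credible.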

Dimension Data is zeroed in on enterprise needs, understands the all-important relationship of network availability to cloud computing, and offers firmer SLAs. The ways it departs from the Amazon model give it a chance to grow in those parts of the world where Amazon is weakest.

And PrivateCore makes one more. It's not literally a cloud supplier, but one day clouds may differentiate on whether or not they are a PrivateCore-equipped service. PrivateCore is a 2012 startup in Palo Alto, Calif., founded to enable public cloud suppliers to guarantee the privacy of customer data under all circumstances. It is the brainchild of former security experts from Google, VMware, and the IDF.

Their timing may be good. PrivateCore claims it can guarantee your servers in the public cloud can't be snooped on by the NSA.

As a demonstration, PrivateCore put its vCage software on a server run by the Tor anonymity network in IBM's SoftLayer cloud. vCage protects data in use by encrypting it in RAM, shielding it from NSA-style snooping programs.
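The data-in-use idea is that what sits in main memory is ciphertext, decrypted only transiently when accessed. A purely conceptual sketch follows; a one-time pad over bytes stands in for the AES-class encryption a product like vCage would use, and a real system keeps the key inside the CPU rather than alongside the ciphertext in RAM, which this toy does not attempt:

```python
import secrets

class SealedBuffer:
    """Conceptual illustration of data-in-use encryption.

    The plaintext is never stored; only the XOR-encrypted form and
    its key live in this object, and decryption happens on access.
    """
    def __init__(self, plaintext: bytes):
        self._key = secrets.token_bytes(len(plaintext))
        self._ct = bytes(p ^ k for p, k in zip(plaintext, self._key))

    def open(self) -> bytes:
        # Decrypt transiently; callers should discard the result quickly
        return bytes(c ^ k for c, k in zip(self._ct, self._key))

buf = SealedBuffer(b"customer record")
assert buf._ct != b"customer record"   # what sits in RAM is ciphertext
assert buf.open() == b"customer record"
```

An attacker who dumps only the ciphertext buffer learns nothing; vCage's contribution is keeping the key material out of snoopable RAM entirely.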

PrivateCore's vCage "uses a brilliant design created by experts who knew what they were doing" to move security in the cloud forward, claimed German security consultant Felix Lindner, head of Recurity Labs in Germany. The use of data-in-use encryption eliminates the possibility that your data will be handed over to governmental authorities by your cloud provider.

PrivateCore was founded by CEO Oded Horovitz, former senior staff engineer at VMware who led development of vShield, and Steve Weiss, a former Google senior engineer who received a Google Founder's award for designing Google's two-step verification.

Charles Babcock is an editor-at-large for InformationWeek, having joined the publication in 2003. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive Week.

Emerging standards for hybrid clouds and converged datacenters promise to break vendors' proprietary hold. Also in the Lose The Lock-In issue of InformationWeek: The future datacenter will come in a neat package. (Free registration required.)

