Private cloud has not been the path of choice for many IT staffs, and critics say there's a good reason for that: the public cloud is more efficient. But Jonathan Bryce, executive director of the OpenStack Foundation, offers a counterargument.
During a recent visit to San Francisco from OpenStack headquarters in Austin, Texas, Bryce spent an hour with InformationWeek talking about how the options for private cloud are looking better than they ever have. That's mainly because much has been learned through hard experience: early OpenStack implementers found the cloud software complicated and frequently changing, and it incorporated a number of concepts that were still evolving, such as its networking platform, Neutron.
Early private cloud software consisted of a set of choices offered by early stage OpenStack, an alternative, CloudStack, and Eucalyptus, an open source company offering a private cloud built around Amazon-compatible APIs. "All of these were immature and difficult to use in their own way, if you didn't have a deep IT staff," Bryce conceded at the start of the interview.
But that was private cloud, generation one. We're now on the second generation of private cloud, and the upcoming Ocata release of OpenStack, due in the latter half of this month, will show how compute, network and storage have matured under OpenStack's continued development.
OpenStack is making it easier to implement containers in its cloud environment as well as software-defined networking and storage. "Containers are an application technology. They still need infrastructure to run on. A recent 451 Research report said OpenStack makes it easier to run your containers with fewer people," said Bryce. Open source Kubernetes, Mesos or Docker Swarm are part of that picture. Nevertheless, OpenStack is there to provide essential infrastructure services, he said.
Furthermore, it's clear that the public cloud is not the most cost-efficient option for every enterprise. Bryce pointed to the initial public offering documentation for the Web photo sharing service, Snap, as an example. Its amended S-1 filing with the SEC lists as a prospective expense a $1 billion contract with Amazon Web Services for five years of "redundant infrastructure support of operations" and a $2 billion expense with Google Cloud Platform over five years for "cloud services on which Snap primarily relies" for compute, networking and storage.
That $3 billion would go a long way toward establishing an effective private cloud for Snap, suggested Bryce, but he doesn't hesitate to say that the public cloud can be the right infrastructure for a young company when it's growing fast and can't necessarily predict its ultimate scale of operations.
Inside OpenStack, there's a project called Zun for large-scale container management, a vendor-neutral way of tracking containers across different orchestration engines (such as Kubernetes, Mesos, Rancher, or Docker Swarm), knowing where they are, who has access to them and how they're being used, said Bryce. A presentation on Zun was featured during the Oct. 23-28, 2016, OpenStack Summit in Barcelona.
Enterprises interested in moving to containers should consider running a private cloud under OpenStack, with its virtual machine, container and bare metal management capabilities. They probably shouldn't try to implement one by themselves, though. The number of moving parts still makes it a challenging process, Bryce acknowledged.
But with help from OpenStack experts at Red Hat, Canonical, Mirantis, CSC or HPE, among other places, they can get to a private cloud that may cost less to operate in the long run than subscribing to the leading public cloud services.
"Some users of data and I/O intensive applications in particular see 50%-90% cost savings" as they establish their own internal cloud operation instead of resorting to the public cloud, Bryce said. That's in part because high performance I/Os carry an additional fee inside AWS. Moving data out of the cloud back to on-premises results in additional fees.
A set of predictable, "steady-state applications don't need the premium of operating in the public cloud," he said. Public cloud use is best reserved for applications with hard to predict scalability requirements, typical of young and fast-growing companies.
Not everybody agrees with that conclusion. Joe Emison, CIO of Xceligent, writes in an Interop research report that the private cloud is in retreat and the public cloud is starting to dominate IT staff thinking.
But Bryce started out in public cloud services as the founder of Mosso, one of the first public clouds, which began as a cluster inside Rackspace. He's watched OpenStack mature to become a vendor-neutral virtual machine and container platform. Private cloud may not be for everyone, but if it fits the workloads an IT staff thinks it's likely to be hosting long term, then it may be time to give OpenStack a second look, he said.
Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive ...