But first we need an underlying cloud fabric that allows for flexibility of data and applications.
Several high-profile organizations have recently migrated off the public cloud, taking all workloads back onto their own private clouds.
For example, Zynga, HubSpot, MemSQL, and even the CIA made headlines when they moved from Amazon Web Services to private clouds (in the case of the CIA, Amazon is building the organization a private cloud).
A 2013 survey from CompTIA revealed that one-quarter of companies using public clouds are transferring IT services from public cloud providers to on-premises systems and/or private cloud models. With all of the alleged efficiency of using the public cloud, why would so many companies choose to take everything private?
Workload inequality
The various types of workloads, and even the phases of the workload lifecycle, present steep challenges for cloud adoption. Testing and development may be commonplace cloud-ready scenarios, but production workloads come in so many flavors that moving them to the cloud and operating them there is a very complex process.
Each type of workload employs datacenter infrastructure in different ways, which can include:
Numerous tiers of the application
Complex management needs
Varied protection, recovery timelines, and SLAs
Therefore a "one-size-fits-all" approach does not give enterprises the flexibility to manage these disparate needs.
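As a hypothetical illustration of that mismatch (the workload names, RPO/RTO figures, and the `cloud_ready` helper below are all invented for this sketch, not real product data), different workload classes carry very different protection requirements, and a generic cloud tier can only absorb the loosest of them:

```python
# Hypothetical SLA matrix: each workload class has its own recovery-point
# objective (RPO), recovery-time objective (RTO), and application tier count.
# All names and numbers are illustrative, not vendor figures.
sla_matrix = {
    "test/dev": {"rpo_minutes": 1440, "rto_minutes": 480, "tiers": 1},
    "web app":  {"rpo_minutes": 15,   "rto_minutes": 60,  "tiers": 3},
    "database": {"rpo_minutes": 1,    "rto_minutes": 15,  "tiers": 2},
}

def cloud_ready(workload, provider_rpo_minutes=30):
    # A one-size-fits-all cloud tier only suits workloads whose RPO is
    # looser (larger) than what the provider can actually deliver.
    return sla_matrix[workload]["rpo_minutes"] >= provider_rpo_minutes

# Only the test/dev class clears the bar -- exactly the pattern described
# above, where test and dev are cloud-ready but production is not.
print([w for w in sla_matrix if cloud_ready(w)])
```

The point of the sketch is simply that a single provider-wide service level cannot satisfy rows of this table with wildly different RPO requirements.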
For those companies considering going the other way -- moving workloads from their on-premises datacenters to the public cloud -- the situation is even more complex. To accomplish this, production workloads must be easy to mobilize, and centrally managed and protected, so that they can interoperate with, and reap the benefits of, the public cloud. But production workloads today are not easily mobilized, because they're siloed.
Breaking the silos
Workloads are siloed by the hypervisor, be it VMware ESXi, Microsoft Hyper-V, Citrix XenServer, or Red Hat KVM. Workloads simply cannot move between these hypervisors easily: conversion requires behind-the-scenes processes that demand effort from both the end user and the hypervisor platform.
They are also siloed by hardware: "vendor lock-in" to a specific brand of hardware makes it difficult to move workloads for use cases like disaster recovery and data migration. Big storage likes it this way. Differentiated workloads will not magically become interoperable with any other datacenter environment, so a move to the cloud is fraught with problems from the start.
Workloads are also siloed in clouds. Using a cloud computing model allows for converged infrastructure and shared resources across departments or across clouds. However, without shared management of these resources across cloud platforms, customers get locked in to one specific provider. Cloud providers often use proprietary infrastructure and tools, making them incompatible with other clouds.
So where can enterprises turn? Lately, there has been a lot of buzz around the hybrid cloud, where an organization manages some resources in-house and has other resources provided by an external cloud provider.
The hybrid cloud is interesting to IT departments and CIOs because it allows for cost reduction, cloud bursting, server migrations, disaster recovery, and data portability. Gartner estimates that 70 percent of enterprises will pursue the hybrid cloud by 2015. Unlike the public cloud model, the hybrid cloud returns far more control to the IT department, allowing for greater flexibility and easier management of workloads. But how does a hybrid-cloud-based datacenter avoid or remove the datacenter silos?
We need a "cloud fabric"
For the hybrid cloud to reach widespread adoption by removing these silos, we need to see the development and adoption of an underlying infrastructure layer that allows for seamless flexibility of data and applications across clouds, hypervisors, networks, and hardware -- a concept I like to call the "Cloud Fabric."
My company, Zerto, has identified the key functionalities that production workloads need in order to utilize any cloud. Zerto will be rolling out products in the next year that support the core principles of the cloud fabric concept (which is not Zerto's concept alone).
I see four critical components of the cloud fabric layer:
A powerful transport layer for data and applications, one that is cross-hypervisor and hardware agnostic
Orchestration of the mobility of complex applications
Encapsulation of all of the dependencies that are part of an application, such as boot order and IP configuration
Production-level tools for the highest service levels of data mobility and protection, so that mobility of workloads is easy to manage and report on
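To make the encapsulation idea in the third component concrete, here is a minimal, hypothetical sketch -- the `VMSpec` and `WorkloadManifest` names and fields are invented for illustration, not any real product's API -- of a portable descriptor that captures an application's boot order and IP configuration so the workload can be recreated on any hypervisor or cloud:

```python
from dataclasses import dataclass, field

@dataclass
class VMSpec:
    name: str
    boot_order: int   # position in the application's startup sequence
    ip_config: str    # static address in CIDR form, or "dhcp"

@dataclass
class WorkloadManifest:
    """Hypothetical hypervisor-agnostic descriptor: everything a target
    site needs to bring the application up correctly."""
    app_name: str
    vms: list = field(default_factory=list)

    def startup_sequence(self):
        # Return VM names in the order they must boot
        # (e.g. the database tier before the web tier).
        return [vm.name for vm in sorted(self.vms, key=lambda v: v.boot_order)]

manifest = WorkloadManifest("crm")
manifest.vms = [
    VMSpec("web-01", boot_order=2, ip_config="dhcp"),
    VMSpec("db-01",  boot_order=1, ip_config="10.0.0.5/24"),
]
print(manifest.startup_sequence())  # db-01 boots before web-01
```

Because the manifest travels with the workload rather than living inside one hypervisor's configuration, a recovery or migration site could, in principle, replay it without the source and target sharing a vendor.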
Enterprises will be able to move production applications between clouds without incurring downtime, and without changing configurations between sites. The hybrid cloud will be able to protect and recover applications without the need to purchase storage from the same manufacturer for both production and recovery sites.
In this "brave new world," organizations will be able to manage applications through a single console, whether those applications reside on-premises or in a cloud datacenter.
Ziv Kedem is co-founder and CEO of Zerto. Previously, he was a founder of Kashya Inc., where he served as CTO and developed a widely used storage replication solution for disaster recovery. Ziv sold Kashya to EMC for $160 million in 2006.