What you need, where you need it, when you need it.
One of the core goals of cloud computing can essentially be summed up that way. Whether you need application access, more processing power or memory, or greater data capacity, cloud computing enables rapid scaling up and down.
“Up and down” matters. Just as a cloud user can rapidly request and access a resource, it is equally important to release those resources rapidly when the work is done. This is also a core goal of cloud computing: dramatically reducing operating cost, especially through optimized utilization of pooled resources.
Containers Continue Changing The Cloud
The underlying fabric, the way in which we actually use cloud computing to execute applications and manage workloads, is rapidly advancing.
Before the cloud, and even in the earliest days of the transition to cloud computing, applications were monolithic assemblies of code running on a processor and reading or writing the data they needed on large network-attached storage (NAS) appliances. Scaling up meant adding more drives to the appliance. These appliances were usually highly specialized and therefore expensive. If a flaw developed in the software, it usually brought the entire application down. This model remains in use in many environments to this day.
More recently, applications have become an assembly of microservices, individual processes that are each containerized along with all of the resources they require to execute, including specifications for where and what storage they require. This is highly consistent with the fundamental architecture of cloud-connected networks. These containers can be instantiated wherever they will be most efficiently used, and if a flaw develops in one of them, that container is simply discarded and re-instantiated. The entire application continues to run. This provides very high resilience and reliability.
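The discard-and-re-instantiate behavior described above can be sketched as a simple supervision loop. This is an illustrative Python sketch, not any real orchestrator's code: a failed instance is treated as disposable and simply replaced, so the application as a whole keeps running.

```python
import subprocess
import time

# Illustrative sketch: a supervisor that treats a crashed process the way an
# orchestrator treats a crashed container -- discard it and start a fresh
# instance rather than trying to repair it in place.
def supervise(start_cmd, max_restarts=3):
    restarts = 0
    while restarts <= max_restarts:
        proc = subprocess.Popen(start_cmd)
        proc.wait()                      # block until the instance exits
        if proc.returncode == 0:
            return True                  # clean exit: nothing to restart
        restarts += 1                    # crash: discard and re-instantiate
        time.sleep(0.1)                  # brief backoff before restarting
    return False                         # gave up after repeated failures
```

Real orchestrators add health checks, backoff policies, and placement decisions on top of this basic loop, but the core idea is the same: replace, don't repair.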
During the earliest days of containerized microservices, storage was still provided from monolithic NAS appliances, creating a significant performance penalty, since every storage request had to transit the network.
Software-Defined Storage Matches Well With Containerized Microservices
Software-defined storage solved the storage challenge.
First, it moved the software intelligence off the device. This eliminated the need for bespoke NAS appliances, replacing them with less expensive commodity storage hardware deployed throughout a cloud network. Running the storage management on servers also returned tremendous flexibility to network management. Data could now be distributed to clusters of servers and storage devices throughout the network, putting large portions of data much closer to where it was needed and adding the resiliency of redundant instances of data segments. This, in turn, allowed the system to tolerate storage hardware failures: if one drive went out of service, replicas of all of its stored data segments were available elsewhere.
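The replica placement just described can be sketched in a few lines. This is a hedged, illustrative Python example using rendezvous-style hashing; the names are invented, and real software-defined storage systems such as Ceph use more sophisticated placement (Ceph's CRUSH algorithm), but the principle is the same: placement is decided in software, and every segment lands on several servers so one drive failure loses nothing.

```python
import hashlib

# Illustrative sketch: rank servers by a hash of (segment, server) and keep
# the top N as replica locations. Placement is pure software -- no special
# appliance -- and each segment lives on several distinct servers.
def place(segment, servers, replicas=3):
    ranked = sorted(
        servers,
        key=lambda s: hashlib.sha256(f"{segment}:{s}".encode()).hexdigest(),
    )
    return ranked[:replicas]

def readable_after_failure(segment, servers, failed):
    # A segment survives if any of its replicas is on a server that did not fail.
    return any(s not in failed for s in place(segment, servers))

servers = [f"server-{i}" for i in range(6)]
assert readable_after_failure("segment-42", servers, failed={"server-0"})
```

Because the three replicas of each segment are on three distinct servers, any single drive or server failure still leaves at least two readable copies.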
Shared Service Means Shared Resources
Shared service providers (SSPs), such as telephone companies, hosting services, and related businesses, all stand to gain much from the marriage of containerized microservices and software-defined storage.
In this environment, everything is software-defined and therefore readily changeable. Solutions like Kubernetes and Mesos facilitate and automate the orchestration of these resources, making it possible for shared service providers to give customers whatever they need, wherever they need it, whenever they need it.
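The "software-defined and readily changeable" idea behind such orchestrators can be reduced to a declarative reconciliation loop: the provider states a desired amount of a resource, and a control loop adds or releases instances until reality matches. The sketch below is illustrative Python, not a real Kubernetes or Mesos API.

```python
# Illustrative sketch of declarative reconciliation: compare the desired
# replica count against what is actually running, then scale up or down.
# All names here are invented for illustration.
def reconcile(desired, running):
    running = list(running)
    while len(running) < desired:               # scale up: start new instances
        running.append(f"instance-{len(running)}")
    while len(running) > desired:               # scale down: release resources
        running.pop()
    return running
```

Scaling down is as simple as scaling up, which is exactly the "release resources when done" goal described at the top of this article.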
These are just some of the advantages SSPs are enjoying thanks to the combination of containers and software-defined storage. To learn more, contact your Red Hat representative and ask about Ceph storage clustering.
Sebastien Han currently serves as a Principal Software Engineer, Storage Architect for Red Hat. He has been involved with OpenStack and Ceph Storage since 2011 and has built strong expertise in these two areas. Curious and passionate, he loves working on bleeding-edge ...