Why use an entire operating system just to run one application? Containers-as-a-service offers a smarter way to run applications in the cloud.
Many virtual machines are spun up in the cloud to run a single application. Often, the resources consumed by that application are dwarfed by those consumed by the operating system itself -- in memory, disk space and CPU utilization. So why run an entire OS just to run one application? That is one problem virtual containers were created to solve.
Containers-as-a-service is a type of infrastructure-as-a-service specifically geared toward efficiently running a single application. A container is a form of operating system virtualization that is more efficient than typical hardware virtualization. It provides the necessary computing resources to run an application as if it were the only application running in the operating system -- in other words, with a guarantee of no conflicts with other application containers running on the same machine. For agencies and enterprises moving applications to the cloud, containers represent a smarter and more economical path.
In traditional hardware virtualization, a hypervisor (either software or bare metal) can run one or more guest operating systems. Each operating system acts as if it is in control of the entire machine. With containers (currently implemented on Linux, BSD and Solaris), applications can be virtualized more efficiently and run as if they control the entire OS user space. For example, a container can be rebooted, and it can have its own root access, IP address, memory, processes, files, applications, system libraries and configuration files.
An important distinction in this operating-system-level virtualization is that the kernel itself is not virtualized: each container gets its own system libraries and binaries to provide isolation, but all containers share the host kernel. That shared kernel plays a role similar to a hypervisor's, but far more efficiently -- and it means containers cannot run different guest operating systems. A container is an isolation unit within a single OS (in this case, Linux).
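The shared-kernel point can be seen directly: every process on a host, containerized or not, reports the same kernel release, because there is only one kernel to ask. This is a minimal illustrative sketch in Python (not container tooling), using a child process as a stand-in for a containerized workload:

```python
import platform
import subprocess
import sys

# The parent process asks the kernel for its release string.
host_kernel = platform.release()

# A child process -- standing in for a "containerized" workload --
# asks the same question and gets the same answer, because it shares
# the one running kernel. A hypervisor guest, by contrast, boots its
# own kernel and could report anything.
child_kernel = subprocess.run(
    [sys.executable, "-c", "import platform; print(platform.release())"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print(host_kernel == child_kernel)  # -> True
```

A real container adds namespace and filesystem isolation on top of this, but the kernel underneath stays the same one the host is running.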
The most obvious benefit of Linux containers is that they are much more efficient in terms of memory, drive space and CPU utilization than hardware virtualization, because they avoid the OS overhead carried by each virtual machine. You can run many more containers on the same hardware than you could virtual machines. Additionally, there is no boot time with Linux containers, so spinning up a new container is an order of magnitude faster than booting an entire operating system.
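The "no boot time" claim follows from what a container actually is: a process (or group of processes) on a kernel that is already running. As a rough, hypothetical sense of scale, the Python sketch below times plain process startup as a stand-in for container startup -- a real VM boot, for comparison, is measured in minutes:

```python
import subprocess
import sys
import time

# Starting a container is, at bottom, starting a process on a kernel
# that is already up. Time how long it takes to launch and reap a
# trivial child process as a stand-in for that cost.
start = time.perf_counter()
subprocess.run([sys.executable, "-c", "pass"], check=True)
elapsed = time.perf_counter() - start

# Typically a small fraction of a second -- versus minutes to boot
# a full guest operating system under a hypervisor.
print(f"process startup took {elapsed:.3f}s")
```

Real container runtimes add some setup cost (namespaces, filesystem layers), but the total remains in the seconds-or-less range the article describes.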
What does this mean for the future of infrastructure-as-a-service? Containers are a more efficient competitor to hardware virtualization, and many platform-as-a-service implementations -- including Heroku, OpenShift, dotCloud and CloudFoundry -- use containers. Additionally, some private cloud IaaS implementations, such as OpenStack and CloudStack, offer support for containers. So containers are a viable new type of virtualization that will continue to grow and influence the direction of cloud computing.
Additionally, as cost competition in the IaaS space heats up, CaaS could become a competitive factor due to its greater efficiency and performance compared with hardware virtualization technologies.
There are a few ramifications of CaaS to keep in mind:
-- Since most CaaS activity is on the Linux operating system, CaaS will strengthen, if not cement, Linux's leadership position in the cloud. With most cloud providers, Linux operating systems are a cheaper alternative to Windows and run on smaller configurations that require less memory and disk space. Additionally, Web applications are typically platform-neutral and therefore run equally well on Linux or Windows operating systems. Of course, if an application is specific to Windows technologies (like ASP.net), it must run on a Windows operating system. However, Windows instances take up to nine times longer than Linux instances to start up, according to one performance study.
-- CaaS allows real-time cloud-native applications. Demonstrating cloud-based applications can be tricky with traditional virtual machines because each one can take up to five minutes to start up. That startup time is mostly due to the boot time of the operating system. Containers eliminate that boot time and start up in seconds. That improvement positions containers to become a new base unit for distributed applications, in place of threads. Why? Containers offer a greater degree of isolation and looser coupling than threads. The isolation provides a greater degree of reliability, in the same way that Google Chrome chose process isolation over threads to improve reliability. In distributed cloud applications, reliability and loose coupling are the centerpieces of a robust application.
-- CaaS will spread to all major operating systems. Of course, this prediction is based upon another prediction: the cloud is inevitable. A testament to this growing interest in CaaS is the wave of new implementations popping up, including Google's lmctfy (let me container that for you), Heroku's Dyno and CloudFoundry's Warden. These are in addition to other containers such as Docker, lxc, OpenVZ, BSD Jails and Solaris Zones. MacOS also has something called an App Sandbox (similar to the Java Sandbox concept). Windows also has an application sandbox concept, but it is important to note that while there are some similarities between a sandbox and a container, the two are different. A sandbox usually revolves around security protections for an application rather than around the broader requirements of application isolation.
-- The CaaS concept will continue to evolve, similar to the way Java Virtual Machines evolved into a form of application sandbox for Java bytecode-based applications, and the way J2EE Web containers and EJB (Enterprise JavaBeans) containers evolved into higher-level forms of containers. All of these isolation concepts are important; they support different but intersecting audiences and can help forge a better understanding of what is needed to efficiently and safely run applications in the cloud. The entire container/app sandbox/app engine concept will continue to improve and evolve.
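The process-isolation argument above (the Chrome analogy) can be sketched in miniature. In the hypothetical Python example below -- illustrative only, not container tooling -- three workers run as separate processes, one dies hard, and its siblings finish unaffected. In a threaded design sharing one address space, that kind of crash would take the whole application down:

```python
import multiprocessing as mp
import os

def worker(n: int) -> None:
    if n == 1:
        # Simulate a hard crash: exit immediately without cleanup,
        # the way a native bug (e.g., a segfault) would. With threads,
        # this would kill the entire application.
        os._exit(1)
    # Other workers complete normally.

if __name__ == "__main__":
    procs = [mp.Process(target=worker, args=(n,)) for n in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # Only worker 1 failed; its siblings finished cleanly.
    print([p.exitcode for p in procs])  # -> [0, 1, 0]
```

Containers extend this same property across machines: each one is an independently failing, independently restartable unit, which is what makes them a plausible building block for distributed applications.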
Finally, CaaS is an important part of the evolution of cloud computing. In my new book, The Great Cloud Migration, I discuss the role and manifestations of this cloud evolution and its impact on migrating applications to the cloud. CaaS is not the only way in which clouds are evolving. Other areas of evolution include the blurring lines between PaaS and IaaS, the influence of the "Internet of Things" on the cloud, and cloud interoperability. Of those, CaaS is the most significant change, as it affects the foundational components of cloud computing. Such disruption is a good thing: it pushes the cloud to greater levels of efficiency and opens new avenues for disrupting traditional IT.