Rackspace has launched in beta a container service called Carina that creates clusters for containerized workloads that customers bring to it. Rackspace systems and system operators then manage the cluster so the customer doesn't have to.
"Carina is a way we can make the container resource available very quickly. It's up and running much more quickly than a virtual machine," said Adrian Otto, distinguished architect at Rackspace and leader of the team that created Carina, in an interview.
Rackspace publicly announced Carina at the OpenStack Summit in Tokyo on Oct. 27.
Generating a virtual machine cluster and launching virtual machines onto it generally takes at least several minutes, Otto said. Carina can create a cluster in 45 seconds and launch a containerized workload a few seconds later.
The containers run on bare-metal Linux hosts. That means the Carina service can be used for things that need fast startups and shutdowns -- perhaps too fast for a VM-based public cloud.
Otto gave one example: a developer using an O'Reilly Media online computer-language course. The course presents the developer with a sample of code, which he tweaks as he wishes, then clicks a button to run his creation. Because the course is hosted on Rackspace, Carina generates a Docker container with the code in it and launches it on a server. The developer sees the results of his coding almost immediately.
It's a much more effective way to teach, but if each run required a pause of several minutes, the course would lose its immediacy and run up significantly higher costs, Otto said.
"The code executes in a few seconds, then the container goes away. There's no VMs sitting around idle, fully provisioned, waiting to start up," and incurring public cloud charges by doing so, he pointed out. A video demonstrating O'Reilly's use of Carina is posted on YouTube.
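The run-and-discard pattern Otto describes maps onto the standard Docker CLI. A minimal sketch (the snippet file and image choice are illustrative, not O'Reilly's actual setup):

```shell
# Write a small code snippet to a file, standing in for the learner's input
echo 'print("hello from an ephemeral container")' > snippet.py

# Run it in a short-lived container. --rm removes the container as soon as
# the process exits, so nothing sits around idle and fully provisioned.
# Guarded so the sketch degrades gracefully where no Docker client exists.
if command -v docker >/dev/null 2>&1; then
    docker run --rm -v "$PWD/snippet.py:/snippet.py" python:3 python /snippet.py
else
    echo "docker client not installed; skipping container run"
fi
```

The `--rm` flag is what makes the container ephemeral: Docker deletes it the moment the code finishes, matching the "then the container goes away" behavior Otto describes.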
In an online development setting, containers also provide quick setup and teardown for dev/test work.
In a blog post announcing Carina on Oct. 27, Otto noted that most public clouds running container services, which would include Amazon Web Services, launch them into virtual machines. "Virtual machines have both advantages and disadvantages. They cause application code to execute more slowly than it would on a bare-metal server environment. That's bad. They are easier to secure for multi-tenancy. That's good," he wrote.
Without containerization, it's very expensive to keep one customer's workload separated from another's by continually launching bare-metal servers dedicated to particular customers, Otto wrote. "You end up adding a full server at a time. That's a lot of hardware!"
The Carina service, however, can provide isolation for applications in containers, which is sufficient for some types of workloads, such as the O'Reilly Media online courses or for dev and test of non-mission-critical code. Where isolation doesn't have to be the top priority, Carina provides "a way to scale up using smaller increments (of bare-metal capacity) that more closely tracks the needs of your dynamic workload," he wrote in his post.
Carina is integrated with Docker operations, so a customer whose containerized application is stored on the Docker Hub as an image can pull that image from a Carina account and launch it into the Rackspace Managed Cloud. Carina detects what cluster the container needs and builds it. Such a move would constitute a normal workflow under the Carina service, Otto said in the interview.
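Because the service speaks the standard Docker protocol, that workflow looks much like pointing an ordinary Docker client at a remote endpoint via environment variables. A hedged sketch, with a placeholder endpoint and a generic `nginx` image standing in for the customer's application (Carina's actual credential-setup steps are not shown):

```shell
# Point the Docker client at the cluster's remote endpoint instead of a
# local daemon (the endpoint shown here is a placeholder, not a real cluster)
export DOCKER_HOST="tcp://my-cluster.example.com:2376"
export DOCKER_TLS_VERIFY=1

# Ordinary Docker commands now target the remote cluster: pull a public
# image from the Docker Hub and it runs there, not locally. The "|| echo"
# keeps the sketch from aborting, since the endpoint above is fictitious.
if command -v docker >/dev/null 2>&1; then
    docker pull nginx:latest || echo "placeholder endpoint not reachable"
fi
```

The point of the sketch is that no Carina-specific tooling is required on the client side: the same `docker pull` and `docker run` commands a developer already uses simply execute against the managed cluster.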
Amazon offers its EC2 Container Service, and Google, an early expert in container use, offers Google Container Engine on its public cloud. But Otto said Carina is designed for ease of use by container newcomers through its integration with Docker tools and the Docker Hub.
"One thing we showed in Tokyo was my 10-year-old son, Jackson, demo-ing the (Carina) software. If my 10-year-old son can do it, any developer can do it," said Otto.
A Google customer becomes a user of Kubernetes, the open source project that grew out of Google's internal container management. Otto suggested that not every container user will necessarily want to become a long-term Kubernetes user.
In addition to being a distinguished architect at Rackspace, Otto is project technical lead of Magnum, the container-management module of OpenStack. Because Carina is based on the concepts behind Magnum, it can offer its own container-cluster launching mechanism, along with Docker Swarm cluster management and other Docker tools, Apache Mesos cluster management, and open source Kubernetes.
Magnum is OpenStack's container service for public cloud operators. It was designed not to favor one operator's approach over another's, Otto said. Different management systems may be integrated into it.