OpenStack has evolved to the point where it is delivering real benefits to IT organizations and service providers, but it is also surrounded by myths.


February 23, 2017

Marc Wilczek

OpenStack is becoming a strategic choice for organizations and service providers alike. Since its inception in 2010, OpenStack has experienced impressive growth as it marches toward becoming the de facto standard for new cloud deployments.

Essentially, OpenStack is an open-source cloud platform that orchestrates compute, storage, and networking resources in a virtualized data center. It is built on commodity hardware and managed through web dashboards and APIs. The OpenStack community is growing constantly: today, more than 200 well-known vendors contribute to the code, including Cisco, Dell, HP, IBM, Intel, Oracle, Rackspace, Red Hat, and VMware.
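To give a flavor of that API-driven model, here is a minimal sketch using the openstacksdk Python library. The cloud name "mycloud" is an assumption and stands for whatever entry exists in your clouds.yaml.

```python
# Minimal sketch: API-driven management of an OpenStack cloud via openstacksdk.
# "mycloud" is a placeholder for a cloud entry defined in clouds.yaml.
import openstack

# Authenticate using the credentials stored under the "mycloud" profile
conn = openstack.connect(cloud="mycloud")

# Inventory the virtualized data center: compute instances and networks
for server in conn.compute.servers():
    print(server.name, server.status)

for network in conn.network.networks():
    print(network.name)
```

The same operations are exposed through the web dashboard; the SDK and the REST APIs underneath it are what make an OpenStack environment scriptable and automatable.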


451 Research estimates that the OpenStack ecosystem will grow nearly five-fold in revenue, from a market size of US$1.27 billion in 2015 to US$5.75 billion by 2020.

Why do organizations opt for OpenStack?

In a recent survey, 97% of respondents cited standardizing on a common open platform across multiple clouds among their top considerations, and 92% named avoiding vendor lock-in as another important factor. Additional reasons include customer requirements for OpenStack compatibility, cloud-native app deployment, vendor partnerships, research, data governance, DevOps-friendliness, and self-service and open-source qualities.

In its early days, OpenStack was primarily used for non-critical internal workloads such as test and development. Since then, companies have increasingly moved it into production, especially for cloud-native apps.

What are typical deployment models?

While various combinations of models exist, for simplicity the most common ones are outlined below.

On-Premises Distribution: On-premises is still the most frequently used deployment model. It can be implemented in a do-it-yourself (DIY) fashion using a home-brewed build or one of the vendor distributions. In this scenario, the entire OpenStack environment runs on premises. Internal IT is usually in charge of deployment, configuration, patch and release management, and troubleshooting. With ample OpenStack-experienced engineering resources in place, this deployment model can be cost-effective. However, when resources are scarce or time-to-market matters a great deal, it might not be the model of choice.

Private Cloud in a Box: Some vendors offer appliances, which are usually run on premises. They come with an embedded, vendor-supported OpenStack distribution specifically tailored to this setup. While these appliances require less engineering effort than an on-premises distribution, they tend to be relatively costly. Since they are built upon proprietary hardware and customized code, appliances also contradict the goal of avoiding vendor lock-in, which is what most customers are actually striving for.

Hosted Private Cloud: Unlike the DIY approach, this model uses a service provider's data center to host a private cloud. The service provider owns the infrastructure and operates the environment, governed by a service-level agreement (SLA). Customers benefit from an existing IT landscape and the provider's OpenStack expertise without having to make CAPEX investments or build up in-depth expertise themselves. On the flip side, the service provider has design authority, which leads to vendor lock-in. Furthermore, a WAN connection is needed, and the customer's existing environment becomes fragmented and may be underutilized.

[Want to learn more about OpenStack? Read Private Cloud Merits Second Look As A Container Environment.]

OpenStack-as-a-Service: Like a hosted private cloud, OpenStack-as-a-Service relies on a third-party provider who takes care of everything. However, the stack is accessible via the Internet and leverages public cloud resources; that is, it runs in a shared rather than a dedicated environment. This model offers the greatest degree of agility and flexibility, allowing customers to commission and decommission resources in the blink of an eye and conveniently settle the bill by credit card on a pay-per-use basis. It is perhaps the easiest and fastest option to get going, but, depending on the availability zone, regulated companies might have to deal with privacy concerns. Moreover, the model ultimately leads to vendor lock-in.
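To make the commission-and-decommission point concrete, here is an illustrative sketch that boots a small instance against a hosted OpenStack endpoint and deletes it again, using openstacksdk as above. The cloud profile, image, flavor, and network names are hypothetical placeholders and will differ per provider.

```python
# Illustrative only: commission a short-lived instance on a hosted OpenStack
# service, then decommission it. The cloud profile "hosted-openstack" and the
# image/flavor/network names are hypothetical and depend on the provider.
import openstack

conn = openstack.connect(cloud="hosted-openstack")

image = conn.compute.find_image("ubuntu-16.04")    # placeholder image name
flavor = conn.compute.find_flavor("m1.small")      # placeholder flavor name
network = conn.network.find_network("public")      # placeholder network name

server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)      # block until ACTIVE
print("Running:", server.name)

# Decommission again; with pay-per-use billing, charges stop once it is gone
conn.compute.delete_server(server)
```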

Misperceptions when it comes to OpenStack

No doubt, OpenStack has plenty of benefits. But despite featuring a service-oriented architecture, API-driven access to all components, and the ability to orchestrate multiple virtualization technologies, it's far from being a silver bullet. Here are some of its limitations:

Myth 1: OpenStack is a ready product

OpenStack is an open-source project. It is complex to deploy and is more of a toolkit than a finished product. Deployment can therefore be tedious, and the effort required is widely underestimated. Furthermore, top-notch developers and engineers are hard to find. In a recent survey, users cited the “complexity to deploy and operate” among the main reasons for not recommending OpenStack.

Myth 2: OpenStack is incredibly cheap

At first glance, especially when using one of the public distros, there is little or no cost associated with acquiring the code. Yet building a reliable cloud platform is something different altogether. In fact, it is a challenging exercise, requiring plenty of resources, know-how, time, and perseverance. Once the hours of engineering work and the headcount needed are taken into account, the total cost of ownership (TCO) will be much higher than initially perceived.
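As a back-of-the-envelope illustration of why license-free software does not mean a free platform, the sketch below simply adds up hardware and engineering costs over three years. Every figure is a hypothetical placeholder; only the structure of the calculation is the point.

```python
# Back-of-the-envelope TCO sketch. All figures are hypothetical placeholders;
# substitute your own planned volumes, hardware prices, and salary levels.
HARDWARE_PER_NODE = 6_000        # commodity server, purchase price
NODES = 20
ENGINEER_ANNUAL_COST = 120_000   # fully loaded cost per OpenStack engineer
ENGINEERS = 3                    # deploy, patch, upgrade, troubleshoot
YEARS = 3

hardware = HARDWARE_PER_NODE * NODES                  # 120,000
people = ENGINEER_ANNUAL_COST * ENGINEERS * YEARS     # 1,080,000

print(f"3-year TCO: {hardware + people:,}")           # 1,200,000
# The license-free software is dwarfed by the engineering effort around it.
```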

Myth 3: Maintenance and patching are straightforward

Moving from one OpenStack release to another can be a challenging exercise. After experiencing unexpected issues and downtime in the past, many companies are hesitant to upgrade to new releases. At large scale in particular, upgrading while maintaining stability is not easy.

Summary

OpenStack is a powerful cloud technology, but the complexity of deploying and operating it is often underestimated. There is certainly no one-size-fits-all deployment model; some of the most common ones were highlighted above, and changing a few parameters will lead to different recommendations. Therefore, before making any purchasing decisions, it is vital to create a target picture, thoroughly build a business case with planned volumes (number of instances/nodes, storage, etc.), and scrutinize all available options. An example of such a cost comparison can be found here.

When seriously considering a DIY approach, organizations need to make sure they have the necessary skills on board, and available over time, to avoid a hard landing. Whenever time-to-market matters, leveraging a service provider might be the smarter choice.

OpenStack holds promise, but a lot of work still lies ahead. There is a lack of talent; demand outweighs supply. Its resilience and stability are not yet completely on par with proprietary cloud platforms. As OpenStack matures, more and more companies will take advantage of it and incorporate it into their multi-cloud strategies.

Marc Wilczek is an entrepreneur and senior executive with extensive experience in helping market-leading ICT companies transform themselves, expand into new fields of business and geographies, and accelerate their growth. He began his career as one of the youngest entrepreneurs in the German state of Hesse when he founded two successful IT start-ups in the 1990s, and he has held various leadership roles within the ICT industry over the past two decades. At present, he serves as Vice President Portfolio, Innovation & Architecture for T-Systems, the corporate customers unit of Deutsche Telekom. In this role, he heads the “productization” efforts for the company’s cloud and hosting business worldwide.

