In a recent CNET post, "A maturity model for cloud computing," James Urquhart describes the following five stages of evolution "for an enterprise data center trying to achieve cloud Nirvana."
- Consolidation is achieved as data centers discover ways to reduce redundancy and wasted space and equipment by measured planning of both architecture (including facilities allocation and design) and process.
- Abstraction occurs when data centers decouple the workloads and payloads of their data center infrastructure from the physical infrastructure itself, and manage to the abstraction instead of the infrastructure.
- Automation comes into play when data centers systematically remove manual labor requirements for run time operation of the data center.
- Utility is the stage at which data centers introduce the concepts of self-service and metering.
- Market is achieved when utilities can be brought together over the Internet to create an open competitive marketplace for IT capabilities (an "Inter-cloud", so to speak).
Dr. Dobb's Jake Sorofman describes the following five levels in his Cloud Computing Adoption Model:
- Virtualization. The first level of cloud adoption employs hypervisor-based infrastructure and application virtualization technologies for seamless portability of applications and shared server infrastructure.
- Cloud Experimentation. Virtualization is taken to a cloud model, either internally or externally, based on controlled and bounded deployments utilizing Amazon Elastic Compute Cloud (EC2) both for compute capacity and as the reference architecture.
- Cloud Foundations. Governance, controls, procedures, policies, and best practices begin to form around the development and deployment of cloud applications. Initially, Level 3 efforts focus on internal, non-mission critical applications.
- Cloud Advancement. Governance foundations allow organizations to scale up the volume of cloud applications through broad-based deployments in the cloud.
- Cloud Actualization. Dynamic workload balancing across multiple utility clouds. Applications are distributed based on cloud capacity, cost, proximity to user, and other criteria.
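To make the "Cloud Actualization" level concrete: distributing applications "based on cloud capacity, cost, proximity to user, and other criteria" amounts to scoring candidate clouds and picking the best fit. The sketch below is purely illustrative; the field names, weights, and scoring formula are my own assumptions, not part of Sorofman's model.

```python
# Hypothetical sketch of criteria-based workload placement across utility
# clouds. Field names, weights, and the linear scoring formula are all
# illustrative assumptions, not a real scheduler.

def score(cloud, weights):
    """Higher is better: reward free capacity, penalize cost and latency."""
    return (weights["capacity"] * cloud["free_capacity"]
            - weights["cost"] * cloud["cost_per_hour"]
            - weights["proximity"] * cloud["latency_ms"])

def place_workload(clouds, weights):
    """Choose the cloud with the highest weighted score."""
    return max(clouds, key=lambda c: score(c, weights))

clouds = [
    {"name": "cloud-a", "free_capacity": 0.7, "cost_per_hour": 0.10, "latency_ms": 40},
    {"name": "cloud-b", "free_capacity": 0.4, "cost_per_hour": 0.05, "latency_ms": 120},
]
weights = {"capacity": 1.0, "cost": 2.0, "proximity": 0.01}
best = place_workload(clouds, weights)  # cloud-a wins despite higher cost
```

In practice the weights themselves would shift with business priorities (for example, heavily penalizing cost for batch workloads and latency for user-facing ones), which is exactly why "dynamic" balancing is the hard part of this level.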
Earlier this year, ZapThink's Ron Schmelzer debunked the whole idea of SOA maturity models in an essay, "Forget Maturity Models -- It's Time for an Agility Model."
"It's becoming clear that the industry doesn't really need a SOA maturity model. The act of doing SOA properly in itself is an act of architectural maturity that many companies are having trouble grasping. Companies are trying to understand how to best apply SOA and realize the benefits against their own stated business goals. As such, what's not needed is an abstract, enterprise-wide, industry-wide, artificial measure of maturity that complies with CMMI's five levels, but rather a way of measuring the state of a SOA implementation against the fundamental goal of SOA itself: agility."
"Measuring agility on a scale of 1 to 5 (as almost all maturity models do) is a pointless exercise. Simply put, not all service-oriented projects need to have the same level of agility as others. Some projects require deep levels of loose coupling. Other projects might not need the same amount of loose coupling since each layer of coupling adds flexibility at the potential cost of complexity and efficiency."
Schmelzer goes on to make a persuasive case for a broader metric, based on his earlier Seven Levels of Loose Coupling, that could be used to measure the appropriate agility of a specific project or initiative. Whether or not you agree with Schmelzer and his main point that "not all service-oriented projects need to have the same level of agility as others," much of his criticism of SOA maturity models applies directly to Cloud Maturity Models, especially if your focus is on software as a service, or even capacity as an automated, self-administered service, which is what many organizations seem to be looking for in their current cloud experiments.
If the highest level of a Cloud Maturity Model measures an organization's overall skills, policies, consistency, and practices in developing cloud applications, but the measurement cannot rate specific projects, that strikes me as more nebulous than Nirvana. And if your Cloud Maturity Model does let you rate specific projects, how do you factor in projects where cloud-less local storage (or even cheaper, slower cloud storage) makes the most sense? A case in point is a real-world Video on Demand application I recently wrote about, which used solid-state drives (SSDs) and hard-disk drives (HDDs) in complementary roles: "hot content" that needed the fastest possible IOPS (streaming new releases or the most popular movies) relied on performance-optimized SSDs, while "cold content" that needed the largest possible capacity (storing thousands of classic movies) used capacity-optimized HDDs. To quote Schmelzer again:
"Most SOA maturity models fall into one of the three camps: ill-defined, abstract models of maturity that are primarily based on Service implementation rather than Service architecture, vendor-driven maturity models that attempt to push customers through SOA infrastructure buying decisions, and consultant-driven maturity models that attempt to push customers through architectural exercises that have not proven to truly advance the state of SOA."
If SOA maturity models are ill-defined and confusing, Cloud Maturity Models make even less sense.