Cloud // Infrastructure as a Service
News
7/5/2011 10:22 PM

7 Self-Inflicted Wounds Of Cloud Computing

Don't let poor planning and half-hearted decisions doom your promising cloud projects.

Anthony Skipper at ServiceMesh assembled a comprehensive list of the common holes in companies' approach to cloud computing for his presentation at Cloud Expo in New York in June.

Skipper is VP of infrastructure and security at ServiceMesh, a supplier of IT service management and lifecycle governance. His presentation was titled, "Cloud Scar Tissue: Real World Implementation Lessons Learned From Early Adopters." It was also cited by cloud blogger Andrew Chapman.

Skipper explained seven self-inflicted wounds. I've tried to warn about some of these myself, but I find Skipper's list nearly complete. I'm repeating them after reviewing his slide presentation--and adding a few of my own thoughts, too.

Self-Inflicted Wound 1: Not Believing That Organization Change Is A Requirement

Skipper says delivering an integrated cloud solution in a timely manner "is hard to do with an IT organization separated into fiefdoms based around storage, network, compute, and platform." Cloud computing automates the provisioning of new servers and the review of requests for them, "freeing up a significant amount of capacity with your carbon-based units, which is good because you need them to help in all the new kinds of roles." Those roles include defining policy; migrating legacy solutions or building new stateless, auto-scaling ones; and supporting continuous delivery processes.

This is a neat summary, once you decipher those "carbon-based units." He means, of course, the human capital: the IT staff. Implementing cloud computing tends to erase well-established boundaries, forcing the network manager, system administrator, and storage manager to work together on server archetypes that are acceptable to major segments of the business. Their collaboration allows definitions to be laid down and policies set around what types of servers end users may have. Once that's done, the provisioning of new servers can be automated, and IT staffers are freed up to take on the new roles named above.
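To make the idea concrete, here is a minimal sketch of what policy-gated, automated provisioning might look like. All names, archetypes, and specs below are hypothetical illustrations, not any vendor's API: a request that matches an approved archetype is provisioned automatically; anything else is escalated back to the humans.

```python
# Minimal sketch of policy-gated server provisioning (all names hypothetical).
# The cross-team collaboration produces a catalog of approved archetypes;
# requests that match one are fulfilled without manual review.

APPROVED_ARCHETYPES = {
    "web-standard": {"vcpus": 2, "ram_gb": 4, "encrypted_disk": False},
    "db-secure": {"vcpus": 8, "ram_gb": 32, "encrypted_disk": True},
}

def provision(request):
    """Return a provisioning record, or None if the request needs human review."""
    spec = APPROVED_ARCHETYPES.get(request["archetype"])
    if spec is None:
        return None  # unknown archetype: escalate to the architects
    return {"owner": request["owner"], **spec}

server = provision({"archetype": "db-secure", "owner": "finance"})
print(server)  # {'owner': 'finance', 'vcpus': 8, 'ram_gb': 32, 'encrypted_disk': True}
```

The point of the sketch is the division of labor Skipper describes: staff time goes into curating the catalog and its policies, while the routine fulfillment runs unattended.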

Skipper concluded, "Architects are not just for ivory towers."

My interpretation: the enterprise IT architect can play a key role in setting requirements for the cloud infrastructure and defining server templates. Infrastructure architects are often viewed as disconnected from, and little involved with, the business realities that line-of-business people face. The closer the architect is to the nitty-gritty of the business, the better the cloud services he designs will match business operations. Servers can be designed to be more secure or less so; to run with lots of memory and cache for high performance or in a cheaper, slower configuration; to have lots of I/O bandwidth or little. It's the architect's job to make the choices that are best for the whole organization and come up with an integrated cloud computing plan that serves it. It's not easy, but implementing the new paradigm offers a chance at an old goal: aligning IT services with the business.
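Those trade-offs can be sketched in code. The template below is purely illustrative (the names, fields, and pricing are invented, not any provider's real rates); it shows how each dimension the architect chooses, such as security hardening, memory sizing, and I/O bandwidth, pushes cost in a direction the business has to accept.

```python
# Hypothetical illustration of the trade-offs an architect encodes in a
# server template: hardening, memory/cache sizing, and I/O bandwidth all
# raise cost, so each template is a deliberate compromise.
from dataclasses import dataclass

@dataclass
class ServerTemplate:
    name: str
    hardened: bool   # extra security controls
    ram_gb: int      # large memory/cache for high performance
    io_mbps: int     # provisioned I/O bandwidth

    def monthly_cost(self, base=50.0):
        # Illustrative pricing only, not any provider's real rates.
        cost = base + self.ram_gb * 2.0 + self.io_mbps * 0.05
        return cost * (1.25 if self.hardened else 1.0)

cheap = ServerTemplate("batch-economy", hardened=False, ram_gb=4, io_mbps=100)
fast = ServerTemplate("oltp-premium", hardened=True, ram_gb=64, io_mbps=2000)
print(round(cheap.monthly_cost(), 2), round(fast.monthly_cost(), 2))
```

An architect close to the business knows which workloads justify the premium template and which can live with the economy one; that knowledge is exactly what the "integrated cloud computing plan" captures.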

Self-Inflicted Wound 2: Boiling The Ocean

As Skipper said: "Trying to do all compute platforms at once doesn't make sense." His points were: "Mainframes aren't going anywhere quickly. Most Solaris/Power architectures have already migrated to zones. It's not viable to move all your applications at once. You need time to learn and for your organization to adapt. Some percentage of applications will need work to be ported. Some applications are simply not a good fit for cloud at the current time. The more cloud providers you use, the more scattered your infrastructure and the higher your costs."

There are a number of points to embellish here, and I'm sure Skipper did so during his Cloud Expo presentation, though I didn't get to sit through it. So here's my take: there's no equivalent to the mainframe in the cloud; cloud computing is essentially an x86-server creation. It's hard even to test a mainframe connection in a cloud application. All you can do is simulate it and hope that in real life it works as expected.

So don't boil the ocean by assuming you can forcibly push mainframe apps into the cloud, a Sisyphean task if there ever was one. Likewise, the work running on Sun SPARC and IBM Power servers doesn't fit easily into the cloud. Besides, Sun and IBM adopted strong virtualization technologies of their own, in which one copy of the operating system manages several applications that have been placed in isolation zones. Some enterprise applications will need to be ported to x86; SAP and Oracle have already done so, but not all enterprise applications move easily to Intel architecture. And trying to find cloud providers that satisfy every custom need guarantees that your workloads will be spread around and, in the end, that much harder to manage.
