Cloud expert David Linthicum, speaker at Interop 2013 New York, shares advice on why an IT manager should play cloud architecture detective.
Whether you're building an enterprise private cloud or buying a public cloud service, the underlying architecture of what you end up with will have an impact on what you can do.
"People have a tendency not to think about architecture," noted David Linthicum, senior VP at Cloud Technology Partners in Cambridge, Mass. and a speaker in the Cloud and Virtualization track at Interop New York 2013. And cloud vendors, including Amazon Web Services and Google Compute Engine, tend not to disclose much about their underlying architectures.
Clouds may have some elements in common, but the specific goals behind the selection of their parts and their method of assembly actually differ greatly. "The architectural differences can show up in a big way," Linthicum warned. When little information is available from the vendor, he added, potential cloud users may have to "play private detective to discover what is going on behind the scenes."
The author of 13 books and a frequent conference speaker, Linthicum is former principal of Blue Mountain Labs, a cloud computing consulting firm. Linthicum will speak on "Getting Cloud Architecture Right the First Time" at 2:45 p.m. Oct. 2 at Interop.
In an interview, Linthicum pointed out that in many instances cloud services are not used effectively. Without elaborating on exactly how his Interop talk will address that issue, he did share a few pointers.
First, effective IT managers should map out their organization's specific goals in moving to the cloud, then search for the cloud service that seems most oriented toward meeting them. Unlike Facebook, which publishes the details of its infrastructure, Google offers little information about its own. Google Compute Engine, however, sits on the same infrastructure as Google Search. Based on our understanding of Google Search, an IT manager can then surmise that Google Compute Engine is designed for many parallel operations and speed of execution.
Linthicum added another interesting detail: if you use Google Compute Engine, your workload will not be placed in a virtual machine. That would slow down the Compute Engine, and the customer's workload. Google's architecture doesn't go there.
On the other hand, Amazon Web Services does. It puts each customer's workload in a virtual machine -- an Amazon Machine Image -- under an adapted form of the Xen open source hypervisor. The overhead of virtualization is something that Amazon wants to work with because a workload arriving as a set of virtual machine files can be automatically provisioned with CPU, memory, network and storage. Those resources can be managed automatically, which is clearly an architectural goal at Amazon based on how the company manages customers and launches new, fully automated services. Speed is a secondary consideration.
"I think Google has a different set of architectural principles from Amazon," said Linthicum. Understanding how the two differ helps IT managers understand how each service will excel and where they may fall short.
Another way to highlight the differences is to acknowledge that Amazon, like Google, values performance. Unlike Google, however, Amazon aims to eke out the most performance possible from a more limited amount of resources per user. If that sounds like a dig, remember that not only is AWS the market share leader, it's also the market's low-price leader (or it's at least vying for that distinction) across several server types.
IT managers can surmise the architecture of any public cloud vendor on their own. One way to do this is to test-drive its services. If the vendor offers bare metal servers, they might be good for running a latency-sensitive database system. IBM's core cloud unit, SoftLayer, does this and claims bare metal in the cloud works well for a variety of workloads.
One hidden bottleneck to look for is the rate of data transfer between storage and the cloud server or between applications within the cloud. The speed of storage systems responding to calls for data and the speed of data transfers from one source to another will have a constant effect on how well a cloud server or set of servers can perform. "We can exercise the system and tell a lot just from what we can see from the end user perspective," Linthicum said. "We can see response times."
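Linthicum's "exercise the system" approach can be sketched with a short timing harness. This is a minimal illustration, not a tool he describes: the `simulated_storage_read` function is a hypothetical stand-in that you would replace with a real storage read, API request or inter-application transfer against the cloud under evaluation.

```python
import time
import statistics

def measure_latency(call, samples=20):
    """Time repeated invocations of `call` and summarize the results.

    Returns (median_ms, p95_ms) in milliseconds. `call` stands in for
    any end-user operation: an HTTP request to a cloud endpoint, a
    storage read, or a data transfer between applications.
    """
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        call()
        timings.append((time.perf_counter() - start) * 1000.0)
    timings.sort()
    median_ms = statistics.median(timings)
    p95_ms = timings[int(0.95 * (len(timings) - 1))]
    return median_ms, p95_ms

# Hypothetical stand-in workload: swap in a real call against the
# cloud service you are test-driving.
def simulated_storage_read():
    time.sleep(0.005)  # pretend the operation takes roughly 5 ms

median_ms, p95_ms = measure_latency(simulated_storage_read)
print(f"median: {median_ms:.1f} ms, 95th percentile: {p95_ms:.1f} ms")
```

Running the same harness against two providers, or against storage versus compute within one provider, surfaces exactly the storage-to-server and application-to-application transfer bottlenecks described above, without any visibility into the vendor's internals.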
Microsoft's Windows Azure has been optimized to get the most performance out of a Microsoft software stack. Thus, Microsoft shops are likely to find Azure robust, easy to use and best for performing Microsoft-based tasks such as running .Net programs, Linthicum said. Many modern programs, on the other hand, incorporate non-Microsoft languages and make frequent use of open source software; there are fewer guarantees such programs will run as efficiently on Azure -- or run at all in a supported fashion.
Does it matter that one cloud is better than another at running a certain software stack? Soon, enterprises will be so engaged in the cloud that they'll be paying Amazon $100,000 a month for cloud service. If one software stack runs twice as fast as another in the EC2 environment, that's a potential savings of $50,000 a month.
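The back-of-the-envelope arithmetic behind that figure is worth making explicit. This sketch assumes, as the example above does, that compute charges scale linearly with how long the work takes, so a 2x speedup halves the bill:

```python
monthly_bill = 100_000  # dollars per month on the slower stack
speedup = 2.0           # faster stack does the same work in half the time

new_bill = monthly_bill / speedup
savings = monthly_bill - new_bill
print(f"monthly savings: ${savings:,.0f}")  # -> $50,000
```

In practice the savings depend on how much of the bill is compute-hours rather than storage or network charges, but the linear case shows why stack-to-architecture fit translates directly into money.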
Architecture matters. On Oct. 2, David Linthicum will help explain how to get it right in the cloud.