Virtualization And The Case For Universal Grid Architectures


Today, virtualization can make one system act like 10, but making 10 physical boxes look and act like one is far more valuable. And that is where we are heading.




Essentially since the beginning of the industrial computing era, systems have been designed in a monolithic fashion--that is, as effectively self-contained compute, memory, and I/O systems in a box. While such boxes can stretch out (internally and/or externally), and the capabilities of all their elements and connections have ridden each successive wave of technological advancement, they fundamentally continue to operate the same way. For instance, data lives on some sort of storage device (which is itself, of course, a system that contains processing), is fed into memory where it is processed, and then the pattern is reversed.

Thus, in order to accommodate more of anything--more users on the system, more data to process, more transactions, faster processing--the industry has responded by constantly developing bigger, faster, more capable systems ... systems that continue to remain largely monolithic.

In the meantime, as IT systems became more and more critical to the operation of various business functions, secondary--or redundant (highly available)--systems were required. This introduced the era of clustering, in which one monolithic system stands ready to take over for another monolithic system in the event that the first fails. Clusters have grown in sophistication and size (as have their monolithic components), but they remain comparatively small and confined when compared with the alternative approach: grid.

Moore's Law has meant we have been able to effectively double our capabilities (processing and capacity, anyway--not actual I/O) roughly every 18 months, which has, by and large, kept up with the lion's share of overall demand from the commercial computing buying community.

Until now, that is.

Monolithic architectures, whether clustered or standalone, have historically been finite and static. This means that, in order to execute an application on a system, you have to run that application on that system. The overall system is configured with an operating system (the overall stack controller), with applications running atop that OS. Those applications execute under rigid, specific conditions that are directly tied to that OS and that stack of infrastructure.

Clustering in that situation is normally relegated to simply having System A take over the application workload of System B if/when System B goes down, for whatever reason. There are many variations and subtly different ways this happens, but basically that's it. Sometimes we have more than a 1:1 cluster relationship--sometimes 4:1 or even 8:1--but we never have 1,000:1 or more. As long as the IT world has been comfortable knowing that an application could only execute under those physical parameters, clustering has been fine.
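To make that failover pattern concrete, here is a minimal sketch (in Python, with invented node names and timing values) of the heartbeat logic at the heart of a 1:1 cluster. It illustrates the idea only; real cluster managers add fencing, quorum, and state replication on top of this.

```python
import time

# Hypothetical timeout for illustration; real clusters tune this carefully.
HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before declaring the primary dead

class StandbyNode:
    """System A: a standby that watches System B and takes over its workload."""

    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()
        self.active = False

    def receive_heartbeat(self):
        # Called whenever the primary's "I'm alive" message arrives.
        self.last_heartbeat = time.monotonic()

    def check_primary(self):
        # If the primary has been silent too long, assume it has failed
        # and start running its application workload here.
        silent_for = time.monotonic() - self.last_heartbeat
        if not self.active and silent_for > HEARTBEAT_TIMEOUT:
            self.active = True
            print(f"{self.name}: primary silent {silent_for:.0f}s, taking over workload")

standby = StandbyNode("SystemA")
for _ in range(4):           # a real node would run this loop forever
    time.sleep(2)            # no heartbeats arrive in this demo...
    standby.check_primary()  # ...so failover fires once the timeout passes
```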

But now, virtualization changes all of that.

Server virtualization has allowed IT departments to make one physical stack of hardware appear to the OS/application environments as many individual stacks, yielding much better hardware utilization and efficiency. That's great.
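As a back-of-the-envelope illustration of that efficiency gain (the utilization figures below are hypothetical, not survey data):

```python
import math

# Hypothetical consolidation math: 20 lightly loaded physical servers are
# replaced by virtual machines packed onto a few larger virtualization hosts.
servers = 20
avg_utilization = 0.10   # each old server is roughly 10% busy
host_ceiling = 0.70      # don't load a virtualization host past 70%

total_work = servers * avg_utilization               # 2.0 "servers' worth" of work
hosts_needed = math.ceil(total_work / host_ceiling)  # -> 3 hosts

print(f"{servers} servers of real work fit on {hosts_needed} virtualized hosts")
```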

Building an N-node cluster of individual hardware stacks with high availability is great, and it markedly improves operating efficiency: users can often eliminate many of their previous smaller stacks of equipment and push all of their application environments onto virtual machines running on far less hardware. But the applications themselves reap no additional benefit--and indeed can often lose some--in the process. Users save on hardware and operations, but their applications do not perform any better, and have no better availability or scalability, on virtual hardware than they would on their own dedicated hardware.

This is reality. That does not make it bad; it simply is what it is.

As great as virtualization 1.0 is, and as difficult as the problems it creates are, we truly are at the easy phase; things are going to get much more difficult. Fundamentally, you might say that we're in 1972. Back then, the mainframe of the day was a single big box with a ton of resources in it, which allowed us to create virtual machine instances by carving out some of those resources and dedicating them to a specific virtual machine. If one VM/application environment needed more of anything, we could give it what was needed--presuming, of course, that we had more to give. This is essentially what we do today, except that back then we could actually do it with I/O to some degree as well.
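That carve-out model--both then and, largely, now--can be sketched as a simple resource pool (the class and VM names here are invented for illustration):

```python
class BigBox:
    """One large machine whose resources are carved into dedicated VM slices."""

    def __init__(self, cores, memory_gb):
        self.free_cores = cores
        self.free_memory_gb = memory_gb
        self.vms = {}

    def carve(self, vm, cores, memory_gb):
        # Dedicate a slice of the box's resources to one virtual machine.
        if cores > self.free_cores or memory_gb > self.free_memory_gb:
            raise RuntimeError("nothing more to give")
        self.free_cores -= cores
        self.free_memory_gb -= memory_gb
        self.vms[vm] = {"cores": cores, "memory_gb": memory_gb}

    def grow(self, vm, extra_cores):
        # Give a VM more -- presuming, of course, that we have more to give.
        if extra_cores > self.free_cores:
            raise RuntimeError("nothing more to give")
        self.free_cores -= extra_cores
        self.vms[vm]["cores"] += extra_cores

box = BigBox(cores=64, memory_gb=512)
box.carve("payroll", cores=8, memory_gb=64)
box.grow("payroll", extra_cores=4)  # fine only while this one box has spare capacity
```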

The situation can be summarized thus: Being able to make one physical box look like 10 is very interesting and compelling, but making 10 physical boxes look and act like one is far more valuable. And that is where we are heading.

Today, if you run out of processing capability on your VM, you can either give it more cores within your physical machine (if you have them), or move the VM to a bigger, more powerful physical machine and let it run on those cores. Ironically, this is the very definition of monolithic computing. And yet, tomorrow you will have the ability to distribute--or federate--your application processing across cores, across systems, and across boxes as you need (and to shrink it back accordingly), all without the application knowing or caring.
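What that might look like from the application's point of view can be sketched as below; local threads stand in for separate physical boxes, and all names are hypothetical. The point is simply that the application submits work and the grid decides where it runs:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for separate physical boxes; in a real grid these would be
# remote machines reached over a network, not local threads.
NODES = ["box-01", "box-02", "box-03"]

def run_on_node(node, task):
    # The application never names a machine itself; placement is the grid's job.
    return f"{task} completed on {node}"

def federated_map(tasks):
    # Fan tasks out across every box in the pool. The pool can grow or
    # shrink without the application knowing or caring.
    with ThreadPoolExecutor(max_workers=len(NODES)) as executor:
        futures = [executor.submit(run_on_node, NODES[i % len(NODES)], task)
                   for i, task in enumerate(tasks)]
        return [f.result() for f in futures]

print(federated_map(["txn-1", "txn-2", "txn-3", "txn-4"]))
```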

Mark Peters is a Senior Analyst at the Enterprise Strategy Group, a leading independent authority on enterprise storage, analytics, and a range of other business technology interests.



