Virtualization And The Case For Universal Grid Architectures
By Mark Peters
InformationWeek
Thus, to accommodate more of anything--more users on the system, more data to process, more transactions, faster processing--the industry has responded by constantly developing bigger, faster, more capable systems ... systems that remain largely monolithic.
In the meantime, as IT systems became more and more critical to the operation of various business functions, secondary--or redundant (highly available)--systems were required. This introduced the era of clustering, in which one monolithic system can take over for another monolithic system in the event that the other fails. Clusters have grown in sophistication and size (as have their monolithic components), but they remain small and confined compared with the alternative approach: grid.
Moore's Law has meant we have been able to effectively double our capabilities (processing and capacity, anyway, not actual I/O) roughly every 18 months, which has, by and large, kept up with the lion's share of demand from the commercial computing buying community.
Until now, that is.
Monolithic architectures, whether clustered or standalone, have historically been finite and static. This means that, in order to execute an application on a system, you have to run that application on that system. The overall system is configured with an operating system (the overall stack controller) and applications running under that OS. Those applications execute under rigid, specific conditions tied directly to that OS and that stack of infrastructure.
Clustering in that situation is normally relegated to simply having System A take over the application workload of System B, if/when System B goes down, for whatever reason. There are many variations and subtly different ways this happens, but basically that's it. Sometimes we have more than a 1:1 cluster relationship--sometimes we can have 4:1 or even 8:1, etc. But we never have 1,000:1 or more. As long as the IT world has been comfortable knowing that an application could only execute under those physical parameters, clustering has been fine.
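To make that N:1 picture concrete, here is a minimal Python sketch of active/passive failover. The node names and workloads are invented for illustration and are not tied to any particular clustering product; the point to notice is that the failed system's workload moves wholesale to another box, never broken apart and spread across many.

# Toy model of the N:1 active/passive clustering described above.
# Node names and workloads are hypothetical.

class Node:
    def __init__(self, name):
        self.name = name
        self.workloads = []
        self.healthy = True

def fail_over(failed, standby):
    # Classic clustering in one move: the application stack is not split up
    # or redistributed, it simply restarts somewhere else, whole.
    standby.workloads.extend(failed.workloads)
    failed.workloads = []

system_a = Node("system-a")
system_b = Node("system-b")
standby = Node("standby")

system_a.workloads.append("order-entry")
system_b.workloads.append("reporting")

# System B goes down; its entire workload moves, unchanged, to the standby.
system_b.healthy = False
fail_over(system_b, standby)

print(standby.workloads)   # ['reporting'] -- same monolithic workload, new box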
But now, virtualization changes all of that.
Server virtualization has allowed IT departments to make one physical stack of hardware appear to the OS/application environments as many individual stacks, enabling much better hardware utilization, efficiency, etc. That's great.
Building an N-node cluster of individual hardware stacks with high availability is great, and it enables much improved operating efficiency, because users can often eliminate many of their previous smaller stacks of equipment and push all their application environments onto virtual machines running on far less hardware. But applications reap no more benefit--and indeed can often lose benefit--from doing this. Users save on hardware and operations, but their applications do not perform any better, or have any better availability or scalability, on virtual hardware than they would on their own dedicated hardware.
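Some rough, hypothetical arithmetic makes the distinction clear: consolidation drives hardware utilization up, but no single application can ever use more than one host's worth of resources.

# Back-of-the-envelope consolidation math; all numbers are made up.
# Ten dedicated servers, each running one application at ~10% CPU,
# are consolidated onto two virtualization hosts of the same size.

servers_before = 10
util_before = 0.10                            # average utilization per dedicated box

hosts_after = 2
total_demand = servers_before * util_before   # 1.0 "server's worth" of work
util_after = total_demand / hosts_after       # 0.5 -> 50% per host

print(f"Before: {servers_before} boxes at {util_before:.0%} each")
print(f"After:  {hosts_after} hosts at {util_after:.0%} each")

# What did NOT change: each application still gets, at most, the resources
# of a single physical host. The savings are in hardware and operations,
# not in application performance, availability, or scalability.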
This is reality. That does not make it bad; it simply is what it is.
As great as virtualization 1.0 is, and as difficult as the problems it creates are, we truly are at the easy phase. Things are going to get much more difficult. Fundamentally, you might say that we're in 1972. Back then, the mainframe of the day was a single big box with a ton of resources in it, and it allowed us to create virtual machine instances by carving out some of those resources and dedicating them to a specific virtual machine. If one VM/application environment needed more of anything, we could give it what was needed, presuming of course that we had more to give. This is essentially what we do today, except that back then we could even do it with I/O to some degree.
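A toy sketch of that carve-up, with made-up capacities, shows the ceiling: a VM can grow only as long as the one box underneath it has something left to give.

# Carving a single box's resources into VMs, 1972-style (and, in essence,
# still today). Capacities and VM names are invented for illustration.

class Host:
    def __init__(self, total_cores):
        self.total_cores = total_cores
        self.allocations = {}             # VM name -> cores carved out for it

    def free_cores(self):
        return self.total_cores - sum(self.allocations.values())

    def grow(self, vm, extra):
        # Give a VM more cores, but only if this one box has them to give.
        if extra > self.free_cores():
            raise RuntimeError(f"{vm}: only {self.free_cores()} cores free on this host")
        self.allocations[vm] = self.allocations.get(vm, 0) + extra

host = Host(total_cores=16)
host.grow("vm-erp", 8)
host.grow("vm-web", 4)
host.grow("vm-erp", 2)        # fine: 2 of the remaining 4 cores
# host.grow("vm-web", 8)      # would fail: the single box is the hard ceiling
print(host.allocations, "free:", host.free_cores())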
The situation can be summarized thus: Being able to make one physical box look like 10 is very interesting and compelling, but making 10 physical boxes look and act like one is far more valuable. And that is where we are heading.
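And here, equally minimal and equally hypothetical, is a sketch of the opposite idea: a pool of boxes acting as one. Work is fanned out across whatever nodes the pool currently holds, and the pool can grow or shrink without the code submitting the work ever changing. The node names and the round-robin policy are illustrative assumptions, not a description of any real grid scheduler.

# A minimal sketch of "many boxes acting as one": the caller hands work to
# the pool; the pool decides where it runs and can grow or shrink underneath.

from itertools import cycle

class GridPool:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def add_node(self, node):             # federate outward when demand grows
        self.nodes.append(node)

    def remove_node(self, node):          # shrink back when demand falls
        self.nodes.remove(node)

    def run(self, tasks):
        # Fan tasks out across the current pool, round-robin.
        assignment = {}
        for task, node in zip(tasks, cycle(self.nodes)):
            assignment.setdefault(node, []).append(task)
        return assignment

pool = GridPool(["box-1", "box-2"])
print(pool.run([f"task-{i}" for i in range(6)]))

pool.add_node("box-3")                    # capacity added under the covers ...
print(pool.run([f"task-{i}" for i in range(6)]))   # ... the caller's code never changes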
Today, if you run out of processing capability on your VM, you can either give it more cores within your physical machine (if you have them), or move the VM to a bigger, more powerful physical machine and let it run on those cores. Ironically, this is the very definition of monolithic computing. And yet, tomorrow you will have the ability to distribute--or federate--your application processing across cores, across systems, and across boxes as you need (and also to shrink back accordingly), all without an application knowing or caring.

Mark Peters is a Senior Analyst at the Enterprise Strategy Group, a leading independent authority on enterprise storage, analytics, and a range of other business technology interests.