InformationWeek

Virtualization And The Case For Universal Grid Architectures

Today, virtualization can make one system act like 10, but making 10 physical boxes look and act like one is far more valuable. And that is where we are heading.

By Mark Peters,  InformationWeek
October 17, 2012
URL: http://www.informationweek.com/tech-center/storage-virtualization/virtualization-and-the-case-for-universa/240009240

Essentially since the beginning of the industrial computing era, systems have been designed in a monolithic fashion--that is, they are effectively self-contained compute, memory, and I/O systems in a box. While such boxes can stretch out (internally and/or externally), and the capabilities of all the elements and connections have ridden the advancements-in-technology wave, they fundamentally continue to operate the same way. For instance, data lives on some sort of storage device (which itself is, of course, a system that contains processing), and is fed into memory where it is processed, and then the pattern is reversed.

Thus, in order to accommodate more of anything--more users on the system, more data to process, more transactions, faster processing, and so on--the industry has responded by constantly developing bigger, faster, more capable systems ... systems that remain largely monolithic.

In the meantime, as IT systems became more and more critical to the operation of various business functions, secondary--or redundant (highly available)--systems were required. This introduced the era of clustering, in which one monolithic system can take over for another monolithic system in the event that the latter fails. Clusters have grown in sophistication and size (as have their monolithic components), but remain small and confined compared with the alternative approach: grid.

Moore's Law has meant we have been able to effectively double our capabilities (processing and capacity, anyway, though not actual I/O) roughly every 18 months, which has, by and large, kept up with the lion's share of demand from the commercial computing buying community.

Until now, that is.

Monolithic architectures, whether clustered or standalone, have historically been finite and static. This means that an application is bound to the specific system it runs on. The overall system is configured with an operating system (the overall stack controller), with applications running under that OS. Those applications execute under rigid, specific conditions that are directly tied to that OS and that stack of infrastructure.

Clustering in that situation is normally relegated to simply having System A take over the application workload of System B if/when System B goes down, for whatever reason. There are many variations and subtly different ways this happens, but basically that's it. Sometimes we have more than a 1:1 cluster relationship--sometimes we can have 4:1 or even 8:1, and so on. But we never have 1,000:1 or more. As long as the IT world has been comfortable knowing that an application could only execute under those physical parameters, clustering has been fine.
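
To make that takeover pattern concrete, here is a minimal, purely illustrative sketch (in Python, with made-up node names and a simplistic health check) of the active-passive logic a cluster manager performs; real cluster software adds fencing, quorum, shared storage, and much more.

```python
# Minimal active-passive failover sketch (illustrative only; real cluster
# managers handle fencing, quorum, shared storage, and much more).

class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True          # flipped to False when the node "fails"
        self.workloads = []          # applications currently running here

def heartbeat_ok(node):
    """Stand-in for a real heartbeat/health check over the network."""
    return node.healthy

def failover(primary, standby):
    """Move every workload from a failed primary to its standby partner."""
    for app in list(primary.workloads):
        primary.workloads.remove(app)
        standby.workloads.append(app)
        print(f"{app} restarted on {standby.name}")

system_a = Node("System A")                 # standby
system_b = Node("System B")                 # primary, running the application
system_b.workloads.append("order-entry-db")

system_b.healthy = False                    # simulate a failure of System B
if not heartbeat_ok(system_b):
    failover(system_b, system_a)            # System A takes over B's workload
```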

But now, virtualization changes all of that.

Server virtualization has allowed IT departments to make one physical stack of hardware appear to the OS/application environments as many individual stacks, allowing much better hardware utilization, efficiency, etc. That's great.

Building an N-node cluster of individual hardware stacks with high availability is great, and it enables much improved operating efficiency: users can often eliminate many of their previous, smaller stacks of equipment and push all of their application environments onto virtual machines running on far less hardware. But applications reap no more benefit--and indeed can often lose benefit--by doing this. Users save on hardware and operations, but their applications do not perform any better, or have any better availability or scalability, on virtual hardware than they would on their own dedicated hardware.

This is reality. That does not make it bad; it simply is what it is.

As great as virtualization 1.0 is, and as difficult as the problems it creates are, we truly are in the easy phase. Things are going to get much more difficult. Fundamentally, you might say that we're in 1972. Back then, the mainframe of the day was a single big box with a ton of resources in it, which allowed us to create virtual machine instances by carving out some of those resources and dedicating them to a specific virtual machine. If one VM/application environment needed more of anything, we could give it what was needed, presuming of course that we had more to give. This is essentially the same as what we do today, except that back then we could even do it with I/O to some degree.

The situation can be summarized thus: Being able to make one physical box look like 10 is very interesting and compelling, but making 10 physical boxes look and act like one is far more valuable. And that is where we are heading.

Today, if you run out of processing capability on your VM, you can either give it more cores within your physical machine (if you have them), or move the VM to a bigger, more powerful physical machine and let it run on those cores. Ironically, this is the very definition of monolithic computing. And yet, tomorrow you will have the ability to distribute--or federate--your application processing across cores, across systems, and across boxes as you need (and also to shrink back accordingly), all without an application knowing or caring.

In short, we will eventually live in a world where physical boxes represent nothing other than containers that carry valuable resources, with all the resources in a data center (and even conceivably beyond) pooled, merged, and utilized for as long as required, and then relinquished back to the pool from which they came.

We'll have a pool of processing capabilities, memory, caches, and I/O from which VMs and applications will carve out what they need for the job at hand, and then those resources will disappear back into the pool until they are needed again.
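
As a toy illustration of that carve-and-release model, consider the sketch below (Python; the pool sizes and resource figures are invented for illustration, and real resource federation is vastly more involved).

```python
# Toy model of a federated resource pool: a workload carves out cores and
# memory for a job, then releases them back to the pool when done.
# Illustrative only; the numbers are invented.

class ResourcePool:
    def __init__(self, cores, memory_gb):
        self.cores = cores
        self.memory_gb = memory_gb

    def allocate(self, cores, memory_gb):
        if cores > self.cores or memory_gb > self.memory_gb:
            raise RuntimeError("pool exhausted")
        self.cores -= cores
        self.memory_gb -= memory_gb
        return {"cores": cores, "memory_gb": memory_gb}

    def release(self, allocation):
        self.cores += allocation["cores"]
        self.memory_gb += allocation["memory_gb"]

# Imagine the aggregate resources of an entire data center in one pool.
pool = ResourcePool(cores=4096, memory_gb=32768)

lease = pool.allocate(cores=64, memory_gb=512)   # carve out what the job needs
# ... run the workload on the leased resources ...
pool.release(lease)                              # give it all back to the pool
```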

This is not at all as far-fetched as it might seem. While the process is not yet entirely automated, we have examples where this already works. High-performance computing (HPC) environments have been doing exactly this for years. In HPC, a single job or application is massively parallelized to execute small pieces across thousands of individual physical servers, performing a task thousands of times faster than if it executed serially on a single processor. To the application, it's one machine: one really big machine with a ton of cores.
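
As a small-scale stand-in for that split-the-job-apart idea, the sketch below parallelizes one computation across worker processes on a single machine; an actual HPC grid spreads the same decomposition across thousands of physical nodes using frameworks such as MPI, but the principle is the same.

```python
# Illustrative only: split one job into many small pieces and run them in
# parallel. Here we use worker processes on one machine; an HPC grid spreads
# the same pattern across thousands of physical servers.

from multiprocessing import Pool

def partial_sum(chunk):
    """One small piece of the overall job, executed independently."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    total_items = 10_000_000
    chunk_size = 1_000_000
    chunks = [range(i, min(i + chunk_size, total_items))
              for i in range(0, total_items, chunk_size)]

    with Pool() as workers:                      # one worker per available core
        pieces = workers.map(partial_sum, chunks)

    print("total:", sum(pieces))                 # combine the partial results
```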

So what this tells us is that, if you want to span an application's execution across physical nodes, you don't use a cluster. You use a grid.

But then, guess what the bottleneck is most of the time in HPC environments? I/O. Because, while the compute side may be grid, the storage side normally is a big, fast, fat, shared, monolithic storage instance. So, guess what has to change? Storage is the final frontier. We adopted storage clustering soon after server clustering and never really looked back. Today it seems as if 99% of all networked storage arrays are monolithic, two-controller (that is, clustered) boxes. When you run out of stuff in one box, you bring in another; maybe even cluster those together.

Yes, there are storage arrays that can support more than two-controller clusters today, but few. And even those tend to just be larger clusters--four pairs of two-controller clusters, for example. They are--essentially--still monolithic.

On the other hand, a grid is a federation of resources, unconstrained by traditional architectures. In the grid computing/HPC example, 1,000 servers with 1,000 network connections being squeezed down through two (or eight, or 16, or 32) storage controllers, only to then connect to 1,000 disk drives, makes no sense. Why aren't there 1,000 disk controllers--virtual or otherwise? Eventually there will be. Just as users are restricted to their weakest physical link in virtual environments today, so it will be tomorrow ... unless something different is done.
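
Some back-of-the-envelope arithmetic makes the fan-in problem plain; the per-link and per-controller throughput figures below are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope look at the fan-in bottleneck. All throughput numbers
# are assumptions chosen purely for illustration.

servers = 1000
per_server_link_gbps = 10            # each server's network connection
controller_count = 2                 # the monolithic, two-controller array
per_controller_gbps = 40             # what each storage controller can move

demand = servers * per_server_link_gbps          # 10,000 Gb/s of potential I/O
supply = controller_count * per_controller_gbps  # 80 Gb/s through the array

print(f"Aggregate server demand: {demand} Gb/s")
print(f"Aggregate controller throughput: {supply} Gb/s")
print(f"Fan-in ratio: {demand / supply:.0f}:1")  # ~125:1 squeeze at the array
```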

Today's Computing Architectures

This diagram is what perhaps 95% of the commercial computing world looks like. Sure, there are way more RAID controllers and switches in reality, but the proportions are pretty accurate: there's a lot on the top, a lot on the bottom, and not much in the middle.

It is important to realize that we are really in the first inning of the game. Sure, it's a new, big game, but look what's happened to our lives since 1972. IBM owned commercial computing then, but that didn't stop the industry from constantly re-inventing itself and creating outrageous opportunity and wealth along the way. If VMware is the equivalent of IBM in 1972, which vendors will become the next EMC, Oracle, NetApp, and so on? (And yes, that's like asking who will be the next Digital, Wang, or Prime.)

We've pretty much been doing variations on the same architectural theme (by continuing to develop monolithic implementations of infrastructure) for over 50 years, and historically no significant trend lasts much longer. The time is ripe for an upheaval.

