In short, we will eventually live in a world where physical boxes are nothing more than containers for valuable resources: everything in a data center (and conceivably beyond) pooled, merged, used for as long as required, and then relinquished back to the pool from which it came.
We'll have a pool of processing capability, memory, caches, and I/O from which VMs and applications will carve out what they need for the job at hand; those resources will then disappear back into the pool until they are needed again.
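The carve-out-and-return lifecycle above can be sketched in a few lines. This is a toy model, not any real resource manager's API; the class name, resource types, and capacities are all illustrative assumptions.

```python
# Toy sketch of the "resource pool" idea: capacity is carved out for a job
# and relinquished back when the job is done. All names/numbers are made up.

class ResourcePool:
    def __init__(self, cores, memory_gb):
        self.free = {"cores": cores, "memory_gb": memory_gb}

    def carve_out(self, cores, memory_gb):
        # Refuse the request if the pool can't satisfy it.
        if cores > self.free["cores"] or memory_gb > self.free["memory_gb"]:
            raise RuntimeError("pool exhausted")
        self.free["cores"] -= cores
        self.free["memory_gb"] -= memory_gb
        return {"cores": cores, "memory_gb": memory_gb}

    def relinquish(self, allocation):
        # Resources disappear back into the pool until needed again.
        self.free["cores"] += allocation["cores"]
        self.free["memory_gb"] += allocation["memory_gb"]

pool = ResourcePool(cores=1024, memory_gb=4096)
vm = pool.carve_out(cores=64, memory_gb=256)
print(pool.free["cores"])   # 960 while the job runs
pool.relinquish(vm)
print(pool.free["cores"])   # 1024 once the job returns its resources
```

The point of the sketch is the shape of the lifecycle, not the bookkeeping: nothing is permanently bound to a physical box.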
This is not at all as far-fetched as it might seem. While not yet entirely automated, we already have examples where this works. High-performance computing (HPC) environments have done exactly this for years. In HPC, a single job or application is massively parallelized into small pieces that execute across thousands of individual physical servers, completing a task thousands of times faster than it would run serially on a single processor. To the application, it's one machine: one really big machine with a ton of cores.
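The HPC pattern described above, one job split into small pieces that run in parallel and then recombine, can be sketched with local worker processes standing in for physical servers. This is a minimal illustration, not a real cluster framework.

```python
# Minimal sketch of the HPC split/compute/combine pattern. Here the "nodes"
# are local worker processes; in a real grid they'd be thousands of servers.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker handles one small slice of the overall problem.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the job across 8 "nodes" (strided slices of the input).
    chunks = [data[i::8] for i in range(8)]
    with Pool(processes=8) as workers:
        pieces = workers.map(partial_sum, chunks)
    # To the application, it's one answer from one (really big) machine.
    total = sum(pieces)
    print(total == sum(data))  # True
```

Real HPC jobs use message-passing frameworks rather than a local process pool, but the decomposition is the same: the work scatters, the results gather.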
So what this tells us is that, if you want to span an application across physical nodes, you don't use a cluster. You use a grid.
But then, guess what the bottleneck is most of the time in HPC environments? I/O. Because, while the compute side may be a grid, the storage side is normally a big, fast, fat, shared, monolithic storage instance. So, guess what has to change? Storage is the final frontier. We adopted storage clustering soon after server clustering and never really looked back. Today it seems as if nearly all networked storage arrays are monolithic, two-controller (that is, clustered) boxes. When you run out of capacity in one box, you bring in another; maybe you even cluster those together.
Yes, there are storage arrays today that can support clusters of more than two controllers, but they are few. And even those tend to be just larger assemblies of the same thing (four pairs of two-controller clusters, for example). They are, essentially, still monolithic.
On the other hand, a grid is a federation of resources, unconstrained by traditional architectures. In the grid computing/HPC example, 1,000 servers with 1,000 network connections being squeezed down through two (or eight, or 16, or 32) storage controllers, only to fan back out to 1,000 disk drives, makes no sense. Why aren't there 1,000 disk controllers, virtual or otherwise? Eventually there will be. Just as users are constrained by their weakest physical link in virtual environments today, so they will be tomorrow ... unless something different is done.
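The fan-in problem above yields to back-of-envelope arithmetic. The link and controller throughput figures below are illustrative assumptions, not vendor specifications; the point is the ratio, not the absolute numbers.

```python
# Back-of-envelope arithmetic for the storage fan-in bottleneck.
# All throughput figures are assumed for illustration.
servers = 1_000
per_server_gbps = 10           # assume a 10 Gb/s NIC per server
controllers = 2
per_controller_gbps = 100      # assume 100 Gb/s per storage controller

offered_load = servers * per_server_gbps              # 10,000 Gb/s of demand
storage_ceiling = controllers * per_controller_gbps   # 200 Gb/s of supply

print(offered_load / storage_ceiling)  # 50.0 -- demand exceeds supply 50x
```

Swap in any numbers you like; with two controllers in the middle, the mismatch stays lopsided until the controller count scales with the server and disk counts.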
This diagram is what perhaps 95% of the commercial computing world looks like. Sure, in practice there are many more RAID controllers and switches, but the proportions are pretty accurate: there's a lot on the top, a lot on the bottom, and not much in the middle.
It is important to realize that we are really in the first inning of the game. Sure, it's a new, big game, but look what's happened to our lives since 1972. IBM owned commercial computing, but that didn't stop the industry from constantly reinventing itself and creating outrageous opportunity and wealth along the way. If VMware is the equivalent of IBM in 1972, which vendors will become the next EMC, Oracle, NetApp, etc.? (And yes, that's like asking who will be the next Digital, Wang, or Prime.)
We've pretty much been doing variations on the same architectural theme (building ever-larger monolithic implementations of infrastructure) for over 50 years, and historically no significant trend lasts much longer. The time is ripe for an upheaval.