Physical Vs. Virtual: Oracle, Others Redefine Appliances
While IT rushes to virtualize applications, high-end databases are increasingly moving against the tide, from software to hardware.
Physical appliances are fighting to hold their ground in the enterprise data center. The strength of virtualization seems unassailable: By abstracting processes from the underlying infrastructure, applications can be customized to fit user needs and business demands, rather than having to conform to hardware. Freed from physical constraints, applications can run dynamically wherever they're most efficient. Technologies that used to live on dedicated boxes, from firewalls and WAN optimizers to network management systems and routers, are moving to virtual servers.
The result is what every business is looking for: enormous flexibility and economies of scale. Why not go virtual whenever and wherever possible?
Because there are still cases where physical appliances pay off, thanks to specialization and customization. Without the overhead of virtualization or superfluous software processes, a dedicated piece of hardware will almost always perform a given task faster than software. And hardware that's custom-designed for a single purpose, whether running the Oracle 11g database or examining packets for malicious content at wire speed, usually delivers the highest performance, albeit at the cost of less flexibility and a slower development cycle.
Like clothes dryers and air conditioners, data center appliances are optimized to do one thing very well. Most aren't truly plug and play, but some come pretty close to the ideal of a black box whose inner workings IT needn't worry about.
At either end of the spectrum, we're not wrestling with this decision. If an unvirtualized system uses just a fraction of its available CPU and I/O capacity, or if a workload needs to move across data centers or into the public cloud, virtualization is an easy choice. If the task at hand demands hardware engineered for a single function, as with high-speed deep-packet inspection, very-high-performance routing, or large-scale OLTP or OLAP processing, purpose-built appliances win hands down.
It's the middle ground where we're struggling. Many companies take the approach that if a system isn't broken, IT shouldn't spend time and money to fix it. Then there's the law of unintended consequences: Virtualizing components of an application stack can introduce variables that affect services in unexpected ways. Meanwhile, performance management systems and some security controls will no longer work exactly as they once did. Just getting back to a stable, manageable state can mean serious work for already overtaxed staffs.
Emblematic of this difficult decision is the database portion of the application stack. Not long ago, production databases were strictly off the virtualization track because of their tendency to gobble I/O and processing power. But recent hardware and software improvements have made virtualization a far more viable option for the database management system. The questions now: Which databases, when, why, and how?
Many IT organizations still deploy refrigerator-size, self-contained DBMS servers. Abandoning them in favor of virtualized servers on high-speed LANs using networked storage means changes to everything from how IT teams are structured to policies on how the systems get funded and maintained. That's why, when it comes to database management systems, vendors and IT architects alike are pushing back against the "virtualize everything" movement. Their reasons shed light on the wisdom of going all virtual, all the time.