Whenever we ask IT professionals about standards vs. proprietary, we get people professing their allegiance to network protocols. If my widget wants to communicate with your widget, they need to speak a common language, right?
The problem with standards is that they're the diametric opposite of innovation. As soon as you define a way in which a particular action must be taken, you eliminate other methods of taking that action. Sometimes support of a standard actually inhibits development of creative solutions to system problems.
Take OVF, the Open Virtualization Format for virtual machines. While almost all hypervisors can perform basic import and export with a standard .ovf package, VMware's proprietary--and, yes, innovative--VMDK disk format brings a ton of additional features, like thin provisioning. However, a VM's disks must be converted to VMDK to take advantage of them. Yes, OVF remains useful for passing VMs from hypervisor to hypervisor in a standard, structured way. But the OVF format is also fixed, and because it's fixed, when VMware--or Citrix or Microsoft--wants to support a new feature that requires extending functionality at the virtual disk level, it can't do so in OVF. If it could, the standard would be broken and therefore useless. And even though OVF is technically extensible, if every vendor adds its own extensions, suddenly the thing is proprietary even though it's nominally open. How will Hyper-V read VMware's proprietary OVF extensions? How will VMware read Hyper-V's? What if an extension carries critical machine data?
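To see how an "open" format goes proprietary in practice, consider a schematic OVF descriptor fragment. This is a sketch, not a complete or valid descriptor--the vendorx namespace and section names are invented for illustration--but the pattern follows the OVF convention of marking custom sections with an ovf:required flag:

```xml
<!-- Schematic fragment only; element content abbreviated, vendorx
     namespace and section names are hypothetical. -->
<Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:vendorx="http://example.com/schema/vendorx">
  <VirtualSystem ovf:id="vm1">
    <VirtualHardwareSection>
      <!-- Standard hardware items every hypervisor understands -->
    </VirtualHardwareSection>
    <vendorx:ThinProvisionSection ovf:required="false">
      <!-- Proprietary hint; a foreign hypervisor may skip it safely -->
    </vendorx:ThinProvisionSection>
    <vendorx:DiskEncryptionSection ovf:required="true">
      <!-- If a required extension carries critical machine data,
           a hypervisor that can't parse it must reject the package -->
    </vendorx:DiskEncryptionSection>
  </VirtualSystem>
</Envelope>
```

The escape hatch is the ovf:required flag: optional extensions degrade gracefully, but the moment a vendor marks its extension required, the package only works on that vendor's hypervisor--standard in name only.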
The kicker is, it's during periods of rapid change and intense innovation that we need standards the most. The move to private clouds and heavily virtualized and automated networks certainly qualifies as intense. But how does IT make the right bet in a world where technology is changing too fast to know what the "correct" standard will be a year from now? Maybe tomorrow some upstart will figure out a much better way to do X, and then today's standards are out the window, just as VMware tossed the first release of ESX Server.
While it's a challenging time for IT teams that need to integrate dissimilar systems to deliver tangible business benefits, I can't help but feel bad for vendors. Cisco, Microsoft, VMware, and others catch heat for "breaking" standards, but the reality is, they face a delicate balancing act: provide enough of a standard to promote far-reaching interoperability without hamstringing innovation. And virtualization has not made this difficult task any easier.
"Let's virtualize the network," says Company A. Engineers get to work and figure out that they want to use InfiniBand as a carrier interconnect. Company B has the same goals but prefers 10-Gbps Ethernet. Company C thinks PCI Express' SR-IOV is going to make life easier, so it goes with PCIe. Who's right? Which of these interconnects is going to win? The answer isn't always clear, and in the multivariate world of pervasive virtualization, unclear outcomes have become the rule rather than the exception.
I believe IT teams need to worry less about standards, or the lack thereof, and focus energy on application programming interfaces, or APIs, which define the way outside elements can communicate with a given infrastructure element. Just as virtualization provides a layer of abstraction between a virtual machine and the memory it uses, an API provides a layer of abstraction between the request being made and the way in which the device accomplishes that request. I tell VMware to clone a machine with an API call, and vSphere uses its proprietary cloning technology to physically copy the machine. I can ask both vSphere and Hyper-V to perform the same logical action: clone the machine. The way those two infrastructures actually get the job done could be radically different; if I'm using the right SAN, for instance, VMware can use another API to leverage the power of the storage array to do the task faster, and that's fine. This is the power of the API--and the reason APIs are becoming a standard feature of virtualized infrastructures. Tear down a machine, spin one up, restart, fail over, give me a new network interface. All these actions can be taken via API calls to underlying infrastructure elements without stifling the vendor's ability to innovate.
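The "same logical action, different implementation" idea can be sketched in a few lines of Python. The Hypervisor interface and the two stub back ends below are hypothetical--they stand in for real vendor SDKs, which would do the actual work:

```python
from abc import ABC, abstractmethod


class Hypervisor(ABC):
    """One logical operation; each vendor implements it its own way."""

    @abstractmethod
    def clone(self, vm_name: str) -> str:
        """Clone a VM and return the new VM's name."""


class VSphereStub(Hypervisor):
    def clone(self, vm_name: str) -> str:
        # A real back end might offload the copy to the storage array;
        # the caller neither knows nor cares how the copy happens.
        return f"{vm_name}-clone"


class HyperVStub(Hypervisor):
    def clone(self, vm_name: str) -> str:
        # A Hyper-V back end might export and re-import the VM instead.
        # Same logical result, radically different mechanism.
        return f"{vm_name}-clone"


def provision(hv: Hypervisor, vm_name: str) -> str:
    # Orchestration code targets the interface, not the vendor,
    # so the vendor is free to innovate behind it.
    return hv.clone(vm_name)
```

The orchestration layer calls provision() against whichever back end is in play; swapping hypervisors changes the mechanism, not the request.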
For networking pros, the problem with APIs is that they're invoked programmatically. And while there are plenty of code jockeys around, this programming is happening in a new place: the core infrastructure. That's scary. Code is buggy, and yet suddenly the infrastructure team needs to know how to program, at least if your organization wants to leverage the power of APIs for concrete benefits such as automation, orchestration, and self-repair. Can you hire programmers to augment your infrastructure team's skill set? Certainly. Do those same programmers understand the infrastructure well enough to work efficiently with the existing team? Maybe, but probably not. Time to start cross-training? Undoubtedly.