March 1, 2013
4 Min Read
InformationWeek Green - Mar. 4, 2013
Download InformationWeek March 2013 special issue on software-defined networks, distributed in an all-digital format (registration required).
Collision Course

The fusion of two transformative technologies -- hypervisor management and software-defined networking -- is creating both new alliances and competitive tensions.
It's also forcing IT teams looking to add SDN to their infrastructures to make some tough choices.
The first task: Understand the two prevailing SDN philosophies. For members of the Open Networking Foundation, keeper of the OpenFlow specification, it's all about replacing overpriced switches and their proprietary management and control software with commodity hardware built from merchant silicon and under the direction of centralized controllers running on virtual servers.
"SDN is a market correction," says Stuart Bailey, founder and CTO of network management vendor Infoblox. "It's a huge shift in value from hardware to software."
No surprise, this position is repudiated by big network infrastructure vendors. Juniper Networks co-founder and CTO Pradeep Sindhu says the notion that SDN will turn networks into a pile of Lego-like commodity components misunderstands SDN's real benefits, namely automation and agility, which ultimately deliver lower operational costs. In other words, it's not just about capex. "There will still be rich functionality in network elements," Sindhu says. "It's not just going to be a big controller in the sky."
While there's merit in both viewpoints, neither articulates the most significant and promising benefit of SDN: erasing, not merely bridging, the gap between virtual networks and virtual servers.
The State We're In
Our report on the intersection of SDN and server virtualization is free with registration. It includes 20 pages of action-oriented analysis and 10 charts.
What you'll find:
Details on Cisco's vision
Vendors' plans for SDN-like applications
Today's virtualized data center -- filled with hypervisors, creating soft NICs and Layer 2 switches that are in turn connected to legacy hardware -- has plenty of problems. While virtual and physical assets are well connected at the data layer, there's a disconnect when it comes to control. Protocols for managing configurations and policies were designed for hardware switches and routers; they're generally ignorant of virtual network resources. Although plenty of workarounds are on the table, from Edge Virtual Bridging to VXLAN and Cisco's Nexus 1000V, there's no standard way to fuse the network equipment control plane of flow tables and management interfaces with hypervisor-resident virtual switches and NICs.
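Both the hardware switches and the hypervisor soft switches described above ultimately forward traffic from match-action flow tables; the gap is that no standard control protocol programs both from one place. The sketch below models that shared abstraction in simplified form -- all class and port names are hypothetical, and real implementations such as Open vSwitch or an OpenFlow agent are far richer.

```python
# Minimal sketch of a match-action flow table, the abstraction shared by
# hardware switches and hypervisor virtual switches. Names are illustrative.

class FlowEntry:
    def __init__(self, match, action, priority=0):
        self.match = match        # e.g. {"dst_mac": "aa:bb:cc:dd:ee:ff"}
        self.action = action      # e.g. "output:vnic3" or "drop"
        self.priority = priority

class FlowTable:
    def __init__(self):
        self.entries = []

    def add(self, entry):
        # Keep highest-priority entries first so lookup can stop at the
        # first match.
        self.entries.append(entry)
        self.entries.sort(key=lambda e: e.priority, reverse=True)

    def lookup(self, packet):
        """Return the action of the highest-priority matching entry."""
        for entry in self.entries:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.action
        return "drop"  # table-miss behavior

# A top-of-rack switch and a hypervisor's soft switch can both be modeled
# this way; the article's point is that legacy management protocols only
# know how to program the former.
table = FlowTable()
table.add(FlowEntry({"dst_mac": "aa:bb:cc:dd:ee:ff"}, "output:vnic3", priority=10))
print(table.lookup({"dst_mac": "aa:bb:cc:dd:ee:ff"}))  # output:vnic3
print(table.lookup({"dst_mac": "00:11:22:33:44:55"}))  # drop
```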
There's also no standard way to integrate network services such as firewalls, load balancers and content filters into a network application bundle such that newly instantiated virtual applications can automatically inherit a set of network policies and services. Such is the promise of the expansive vision of SDN: It's more than just a way to route packets.
Still, our InformationWeek SDN Survey shows that overtaxed IT teams remain leery of jumping into SDN, though many respondents appreciate that it's more than just a way to optimize low-level network traffic flows. Thirty-five percent of respondents to our survey see it as useful for automated provisioning and management, and 31% peg SDN as a way to implement network policy. Yet success in either area means augmenting so-called southbound SDN technology such as OpenFlow, focused on Layer 2/3 traffic management, with northbound APIs and orchestration software.
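The northbound/southbound split can be sketched as a simple compilation step: a high-level policy ("let the web tier reach the app tier on TCP 8080") expands into the low-level, OpenFlow-style match/action rules a southbound protocol would push to switches. The tier definitions and rule format below are invented for illustration, not any vendor's actual API.

```python
# Hypothetical sketch: compiling one northbound policy into southbound
# per-flow rules. Addresses, tiers, and the rule schema are made up.

TIERS = {
    "web": ["10.0.1.10", "10.0.1.11"],
    "app": ["10.0.2.20"],
}

def compile_policy(src_tier, dst_tier, dst_port):
    """Expand one high-level intent into per-address-pair flow rules."""
    rules = []
    for src in TIERS[src_tier]:
        for dst in TIERS[dst_tier]:
            rules.append({
                "match": {"ip_src": src, "ip_dst": dst, "tcp_dst": dst_port},
                "action": "forward",
            })
    return rules

rules = compile_policy("web", "app", 8080)
print(len(rules))  # one rule per (web host, app host) pair
```

The orchestration layer the survey respondents point to is, in effect, this translation plus lifecycle management: when a new VM joins a tier, the policy recompiles and the southbound protocol updates every affected flow table.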
To read the rest of the article, download the InformationWeek March 2013 special issue on software-defined networks.
About the Author(s)
Kurt Marko is an InformationWeek and Network Computing contributor and IT industry veteran, pursuing his passion for communications after a varied career that has spanned virtually the entire high-tech food chain from chips to systems. Upon graduating from Stanford University with a BS and MS in Electrical Engineering, Kurt spent several years as a semiconductor device physicist, doing process design, modeling and testing. He then joined AT&T Bell Laboratories as a memory chip designer and CAD and simulation developer. Moving to Hewlett-Packard, Kurt started in the laser printer R&D lab doing electrophotography development, for which he earned a patent, but his love of computers eventually led him to join HP's nascent technical IT group. He spent 15 years as an IT engineer and was a lead architect for several enterprisewide infrastructure projects at HP, including the Windows domain infrastructure, remote access service, Exchange e-mail infrastructure and managed Web services.