Cisco Pitches Virtual Switches For Next-Gen Data Centers

Company sees new virtualization switch as heart of its Data Center 3.0 architecture.

Andy Dornan, Contributor

January 24, 2008

Cisco Systems this week starts pitching a new switch, called Nexus, as the first component of its Data Center 3.0 architecture and as the successor to the Catalyst 6500, the most successful product in Cisco's (or just about any company's) history. Like the Catalyst 6500, the Nexus is a chassis intended for the enterprise data center, into which customers stack blades for additional interfaces. But whereas the Catalyst 6500 is a jack-of-all-trades that can be a firewall, a load balancer, or a router depending on the blades plugged into it, the Nexus is aimed at just one job: virtualization.

Cisco's vision is one in which big companies off-load an increasing number of server tasks to network switches, with servers ultimately becoming little more than virtual machines inside a switch. The Nexus doesn't deliver that, but it makes a start, aiming to virtualize the network interface cards, host bus adapters, and cables that connect servers to networks and remote storage. At present, those require dedicated local area networks and storage area networks, with each using a separate network interface card and host bus adapter for every virtual server. The Nexus aims to consolidate them all into one (or two, for redundancy), with virtual servers connecting through virtual NICs.
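
To make the consolidation concrete, here's a minimal Python sketch (not Cisco code; the function names and per-rack figures are hypothetical) that counts the physical adapters a rack of virtualized servers needs before and after converging LAN and SAN traffic onto shared links with virtual NICs:

```python
# Hypothetical model of I/O consolidation: each host runs several virtual
# servers, and each virtual server traditionally needs its own NIC (LAN)
# and HBA (SAN), plus duplicates for redundancy.

def adapters_traditional(hosts: int, vms_per_host: int, redundancy: int = 2) -> int:
    """One NIC and one HBA per virtual server, each duplicated for redundancy."""
    per_vm = 2 * redundancy  # NIC + HBA, times redundant paths
    return hosts * vms_per_host * per_vm

def adapters_converged(hosts: int, redundancy: int = 2) -> int:
    """One converged adapter per host carries every VM's virtual NICs and HBAs."""
    return hosts * redundancy

if __name__ == "__main__":
    hosts, vms = 40, 8  # illustrative rack: 40 hosts, 8 VMs each
    print("traditional:", adapters_traditional(hosts, vms))  # 1,280 adapters
    print("converged:  ", adapters_converged(hosts))         # 80 adapters
```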

Cisco's interest in expanding the network's scope is obvious--the Catalyst 6500 platform alone has generated more than $20 billion in revenue over its lifetime. But Cisco isn't the only one moving toward virtual I/O. Brocade last week introduced the DCX Backbone, a switch that aims to do much the same as Cisco's Nexus: consolidate SAN and LAN into a single network, and virtualize the NICs that connect them to virtual servers. But the two companies take different approaches at the physical layer, a function of their different roots.

As a router company, Cisco bases its networks on Ethernet: Virtual servers may see a virtual Fibre Channel SAN, but really they're using an Ethernet cable that's shared with other network traffic. The Nexus can still use Fibre Channel, but only for connections to legacy storage targets such as disk drives, and only because disks have a slower replacement cycle than servers, so older systems stay in use longer. Conversely, storage company Brocade uses Fibre Channel for the physical connection to servers, running virtual Ethernet to virtual servers. Brocade expects that it eventually will migrate to Ethernet, too, but that right now Fibre Channel is more reliable.
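
The mechanism underlying Cisco's approach is Fibre Channel over Ethernet: an unmodified FC frame rides as the payload of an ordinary Ethernet frame. A stripped-down Python sketch of that encapsulation (real FCoE adds a version field, start- and end-of-frame markers, padding, and a checksum, all omitted here) might look like:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to Fibre Channel over Ethernet

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an Ethernet II header.

    Simplified sketch: omits the FCoE version, SOF/EOF markers, and
    padding that the actual specification requires.
    """
    header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return header + fc_frame

# Illustrative use: the MAC addresses below are hypothetical, standing in
# for a server's converged adapter and a switch port.
frame = fcoe_encapsulate(b"\x00\x11\x22\x33\x44\x55",
                         b"\x66\x77\x88\x99\xaa\xbb",
                         b"...raw FC frame bytes...")
```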

[Diagram: Virtual Approach]

NOT SO FAST
Customers may want to wait before investing in either approach. Both product lines are immature. Brocade's DCX is currently just a storage switch, with links to Ethernet networks due later this year. Cisco says the Nexus can connect to both Ethernet and Fibre Channel immediately, but not yet at the promised maximum capacity. Modules supporting 40-Gbps and 100-Gbps Ethernet will come eventually, Cisco says, though the timing depends on standardization efforts at the IEEE. For now, both companies' boxes are limited to 10 Gbps per interface.
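
That 10-Gbps ceiling matters because a converged link must absorb what used to be many separate pipes. A back-of-the-envelope check in Python (the per-server traffic figures are illustrative assumptions, not vendor numbers) shows how quickly the headroom disappears:

```python
# Illustrative capacity check for one converged 10-Gbps link.
# Assumed average demand per virtual server (hypothetical values):
LAN_GBPS_PER_VM = 0.5  # Ethernet traffic
SAN_GBPS_PER_VM = 0.8  # storage traffic

def max_vms(link_gbps: float) -> int:
    """How many virtual servers fit before the converged link saturates?"""
    return int(link_gbps // (LAN_GBPS_PER_VM + SAN_GBPS_PER_VM))

print(max_vms(10.0))  # 7 VMs on today's 10-Gbps interfaces
print(max_vms(40.0))  # 30 VMs once 40-Gbps modules arrive
```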

It's also unclear how Cisco plans to use the technology from Nuova Systems, an I/O virtualization startup in which it acquired an 80% stake last August. Since then, Nuova has revealed very little, other than that its products involve Fibre Channel over Ethernet--very similar to the functionality in Cisco's Nexus. But according to Cisco, there's nothing developed by Nuova in the Nexus.

Cisco's previous acquisition strategy has been to add other companies' technology to the Catalyst 6500, usually by converting a standalone appliance into a Catalyst blade. It won't be doing the same with the Nexus, mostly because 100 Gbps is just too fast for wire-speed processing. According to Cisco, the main issue is thermal, as the switch has no way to get rid of the heat that application accelerators or firewalls would generate at such high speeds.

Cisco and Brocade aren't the only vendors offering I/O virtualization. The first to ship a product was Xsigo Systems, a startup that sells a dedicated appliance for converting virtual Ethernet and Fibre Channel into the real thing. Another startup, 3Leaf Systems, says it can do the same thing using a dedicated server instead of an appliance. Both run their virtual networks over InfiniBand, which offers lower latency and overhead than either Ethernet or Fibre Channel. Cisco and Brocade say they may support InfiniBand in the future, but only if customers demand it.

MEMORY AREA NETWORKS
When Cisco announced its Data Center 3.0 strategy in July, its most far-fetched prediction seemed to be that networks eventually will connect CPUs to remote memory banks, not just to remote storage or traditional LANs. The theory is that just as printers and disk drives moved from local devices to network resources, so will all other components. That would mean the end of servers as we know them.

There's no sign of such a revolution in the Nexus or Brocade's DCX, although both vendors see high-performance computing and the network traffic from grid computing clusters as an important use for virtualized I/O, and the one most likely to make them add InfiniBand support. Cisco believes Ethernet is sufficient for clusters running applications such as video rendering, but InfiniBand's lower latency and overhead would be needed for real-time, event-driven applications such as split-second algorithmic stock trading.
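
The latency gap is easy to see with rough numbers. In the sketch below, the per-hop figures are illustrative assumptions (order-of-magnitude values commonly cited for the period, not measurements of either vendor's box):

```python
# Rough latency budget for an event-driven trading pipeline that must
# react within 100 microseconds of a market event.
BUDGET_US = 100.0

# Assumed one-way fabric latencies per hop (illustrative, not vendor specs):
ETHERNET_HOP_US = 30.0   # 10GbE through a switch plus the host TCP stack
INFINIBAND_HOP_US = 2.0  # InfiniBand with OS-bypass (RDMA) transport

def hops_within_budget(hop_us: float, compute_us: float = 40.0) -> int:
    """How many fabric hops fit after reserving time for computation?"""
    return int((BUDGET_US - compute_us) // hop_us)

print(hops_within_budget(ETHERNET_HOP_US))    # 2 hops: little room to spare
print(hops_within_budget(INFINIBAND_HOP_US))  # 30 hops: ample headroom
```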

I/O Virtualization Products

All support Fibre Channel SANs and Ethernet LANs. They differ in physical networks and where virtual networks convert to real ones.

Product | Physical Network | Virtual-To-Real Conversion Point
Cisco Nexus | Ethernet (Fibre Channel over Ethernet) | In the switch
Brocade DCX | Fibre Channel (virtual Ethernet) | In the switch
Xsigo Director | InfiniBand | Dedicated appliance
3Leaf Systems V-8000 | InfiniBand | Dedicated server
3Leaf Systems has announced the product closest to Cisco's vision: a dedicated network for connecting servers at the CPU level. Even so, it falls short of the single virtual network that Brocade, Cisco, and 3Leaf all envision. The CPUs' connections to memory and to other CPUs (on multiprocessor machines) require low-latency, high-bandwidth links that can't pass through a hypervisor or OS, which means they need a physically separate network.

3Leaf uses InfiniBand and a proprietary chip that sits on the CPU's data bus to interconnect CPUs and servers. Even InfiniBand's low latency and high bandwidth can't match the speed of the CPU's memory bus, so in addition to providing the InfiniBand connection, the chip caches data from the memory of other servers in the cluster. The hard part is knowing what to cache for particular applications, which is where most of 3Leaf's proprietary technology comes in.
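
3Leaf hasn't published its caching logic, but the general shape of the problem is familiar. Here's a minimal Python sketch of a least-recently-used cache for remote memory pages (purely illustrative; the real mechanism lives in hardware on the CPU's data bus, and 3Leaf's actual value is in deciding what to cache per application, not the bookkeeping):

```python
from collections import OrderedDict

class RemotePageCache:
    """LRU cache of memory pages fetched from other servers in the cluster.

    Keys are (node_id, page_number); values are page contents. This is a
    hypothetical software stand-in for 3Leaf's on-chip cache.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages: OrderedDict = OrderedDict()

    def read(self, node_id: int, page: int, fetch_remote) -> bytes:
        key = (node_id, page)
        if key in self.pages:
            self.pages.move_to_end(key)      # hit: served at local speed
            return self.pages[key]
        data = fetch_remote(node_id, page)   # miss: pay InfiniBand latency
        self.pages[key] = data
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)   # evict least recently used
        return data
```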

The 3Leaf chip was designed to plug into Advanced Micro Devices' Torrenza sockets, which give third-party components direct access to the CPU's data bus. Intel has a similar technology and has invested in 3Leaf to ensure support for that, too. 3Leaf expects prototypes by March and chips shipping by the end of the year. It has signed up server vendors including Sun Microsystems and Hewlett-Packard as partners, with hopes they'll build the chips into servers.

Meantime, Cisco isn't abandoning the Catalyst 6500, though it's now describing it as a service switch. The company last week unveiled upgrades to two of the Catalyst 6500 blades, boosting Power-over-Ethernet wattage so the wireless blade can support 802.11n access points, and giving its Wide Area Application Services a software client for accelerating WAN traffic to mobile devices. Cisco plans to keep adding service blades to the Catalyst, though it probably won't go beyond 10 Gbps. The higher speeds will be reserved for the Nexus at the data center core.
