
HP Opens Age Of Converged Networks

The week's announcements from HP's annual Technology Forum brought the new world of converged data and storage networks into the mainstream. HP's new blades and ProLiant servers have Emulex silicon providing 10Gbps Ethernet and FCoE as standard equipment on the motherboard and in optional mezzanine cards. Since QLogic supplies the cool new Virtual Connect 10Gb/24 port switch module, both companies get to claim a design win.

The Virtual Connect 10Gb/24 port switch module is cool because it doesn't just aggregate Ethernet traffic from the blades upstream to the core but is an FCoE switch in its own right. Even cooler, QLogic's Bullet ASIC gives four of the eight upstream ports flex personalities, so each can be either Ethernet or Fibre Channel on command. User organizations can dip their toes in the FCoE waters knowing they can add FC modules to the blade enclosure and use all the ports for Ethernet if things don't work out.
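If you want to picture how those flex personalities behave, here's a minimal Python sketch. The port counts match what I described above; the class and method names are mine for illustration, not QLogic's actual management interface.

    # Illustrative model of a switch module with flex-personality uplinks.
    # Eight uplinks, four of them flex, per the module described above.
    # Names here are invented for the sketch, not QLogic's API.

    class UplinkPort:
        def __init__(self, number, flex=False):
            self.number = number
            self.flex = flex              # can this port switch personalities?
            self.personality = "ethernet" # every port starts as Ethernet

        def set_personality(self, personality):
            if personality not in ("ethernet", "fibre_channel"):
                raise ValueError(f"unknown personality: {personality}")
            if personality == "fibre_channel" and not self.flex:
                raise ValueError(f"port {self.number} is Ethernet-only")
            self.personality = personality

    # Eight uplinks, the first four with flex personalities.
    uplinks = [UplinkPort(n, flex=(n < 4)) for n in range(8)]

    # Dip a toe in: run two uplinks as native Fibre Channel...
    for port in uplinks[:2]:
        port.set_personality("fibre_channel")

    # ...and if FCoE doesn't work out, revert everything to Ethernet.
    for port in uplinks:
        port.set_personality("ethernet")

That revert-at-will behavior is exactly what makes the toe-dipping safe: no port is permanently committed to either fabric.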

All these developments, with the exception of the Fibre Channel/10Gbps Ethernet dual personality ports, were expected. In no small part, HP is now proving true the comment I made when Cisco's UCS was introduced: that UCS was next year's blade servers this year. HP's move to put 10Gbps and FCoE on the motherboard puts converged networking firmly in the mainstream. It's now up to Dell, IBM and the rest (SuperMicro, NEC, Fujitsu) to get on the bandwagon with 10Gbps LOM.

This is a big win for Emulex, displacing Broadcom and Intel, which have ruled the LOM market for years. The current version only supports three virtual Ethernet cards (vNICs) and one virtual storage adapter (vHBA) per Ethernet channel, so a typical server with dual-channel LOM has six vNICs and two vHBAs. While that's right at VMware's best practice recommendation for today's virtualization hosts, I can see it becoming a bit constraining, especially as VMDirectPath catches on.
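The arithmetic is simple enough to spell out; this quick sketch uses only the per-channel limits above, with the dual-channel LOM as the typical case:

    # Per-channel limits for the current Emulex LOM, as described above.
    VNICS_PER_CHANNEL = 3   # virtual Ethernet adapters per 10GbE channel
    VHBAS_PER_CHANNEL = 1   # virtual storage adapters per 10GbE channel

    channels = 2            # a typical dual-channel LOM

    print(f"vNICs per server: {channels * VNICS_PER_CHANNEL}")  # 6
    print(f"vHBAs per server: {channels * VHBAS_PER_CHANNEL}")  # 2

Six vNICs is fine when you're carving up traffic into management, vMotion and VM networks, but pass a few of them straight through to guests via VMDirectPath and the headroom disappears fast.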

I can't help but compare this new HP config to a UCS chassis, which holds half as many blades, each with just one dual-port CNA. An HP chassis with two 10Gig/FC modules switches traffic between blades, where UCS sends all data upstream, and still has more uplinks. Plus, you can skip the top-of-row switch and connect eight ports directly to end-of-row Ethernet and FC switches.
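Back-of-the-envelope, the comparison looks like this. Treat the figures below as my recollection of the 2009-era hardware, not spec-sheet quotes: an HP c7000 holding 16 half-height blades with two Virtual Connect modules of eight uplinks each, versus a Cisco UCS 5108 holding 8 blades with two fabric extenders of four uplinks each.

    # Rough per-chassis comparison; figures are my assumptions as stated
    # above, not vendor spec-sheet quotes.
    chassis = {
        "HP c7000 + 2x VC 10Gb/24": {"blades": 16, "uplinks": 2 * 8},
        "Cisco UCS 5108 + 2x FEX":  {"blades": 8,  "uplinks": 2 * 4},
    }

    for name, c in chassis.items():
        print(f"{name}: {c['blades']} blades, {c['uplinks']} uplinks, "
              f"{c['uplinks'] / c['blades']:.2f} uplinks per blade")

Even granting my numbers are rough, the shape of the result holds: the HP chassis packs in more blades and more uplinks, and keeps blade-to-blade traffic inside the enclosure instead of hairpinning it upstream.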
