Switched PCIe: Better Bandwidth?

NextIO's PCIe-based technology moves I/O off the blade and into a switched midplane to improve efficiency, increase bandwidth and offer greater flexibility.

September 20, 2006


The serial-based PCIe architecture offers opportunities for extending peripheral connections beyond the blade, and NextIO believes its switched PCIe midplane for blade systems will offer increased bandwidth and greater flexibility that will translate into substantially decreased I/O cost and complexity at the blade level.

NextIO has partnered with Dell and Fujitsu-Siemens as well as I/O companies LSI Logic and Neterion. It is also working with Denali Software for high-quality verification of shared I/O designs. The PCI-SIG, with more than 900 member companies, is the unincorporated governing body of the PCI standard.



NextIO's approach to solving the bandwidth problems presented by high-density blade servers is unique, but much of its value is dependent on the development of IOV-enabled I/O devices for storage and network connectivity. The fact that NextIO's solution can be adapted to existing system designs and doesn't require specialized software support weighs heavily in its favor. It remains to be seen if NextIO's solution will be adopted by the major players.

Blade servers can pack the power of dual-core processors into a tiny space, but those same servers could find themselves unable to fully use the emerging quad-core processors because of power, cost and I/O limitations in the hardware.

Enter NextIO. The company's PCI Express, or PCIe, switching solution will remove those I/O limitations and let blade-server vendors deliver bandwidth to densely packed servers based on quad-core processors.

NextIO isn't the first to deliver a switching architecture for blade systems; plenty of storage- and network-specific switching architectures exist. But it is the first to put a switch in the midplane, moving I/O off the blade and onto the backplane, thereby increasing performance, reducing blade complexity and leveraging third-party devices.

That said, a number of challenges remain for NextIO. The company relies heavily on an emerging technology called I/O virtualization (IOV), which has few vendors and no standards; the group defining the spec for multiple virtual I/O devices on a single physical device probably won't finish until the end of this year. Additionally, NextIO only came out of stealth mode in May, and without some strong patents the company will find it difficult to hang onto the niche it's working to establish.


Blade Bypass


Architectural Concerns

The processing capabilities of inexpensive x86 microprocessors have moved well beyond the incremental speed advancements of the past, and today's dual-core offerings will transition to quad-core within 12 months. The difficulty for future blade systems will be to provide enough I/O to keep individual server blades with eight, or even 16, processor cores busy.

But bandwidth is only part of the problem. On most blade systems, Ethernet adapters and Fibre Channel host bus adapters are installed on each blade. As a result, multiple fabrics must be switched and routed from each blade--through the chassis and to the backplane--only to be switched and routed again downstream. This design is bandwidth-limited by the power and real estate available on blades, adds to their cost and complexity, and offers nothing in the way of I/O redundancy for individual blades.

The 2003 debut of the serial-based PCIe interface introduced substantial changes to the conventional PCI interface. PCIe offers significantly greater bandwidth and eliminates the electrical limitations imposed by the parallel bus architecture of PCI and PCI-X. High-performance I/O interfaces such as 10 Gigabit Ethernet, InfiniBand and Serial Attached SCSI, which overload a legacy PCI-X/133 bus, are easily handled by PCIe, making it excellent for feeding bandwidth to data-hungry server blades.
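The gap between a shared PCI-X/133 bus and a dedicated PCIe link is easy to quantify. This back-of-the-envelope sketch uses PCIe 1.x and PCI-X/133 figures and is purely illustrative:

```python
# Back-of-the-envelope bandwidth comparison (illustrative figures only).
# PCI-X/133: 64-bit parallel bus at 133 MHz, shared among all devices.
pci_x_gbps = 64 * 133e6 / 1e9            # ~8.5 Gbps, one direction, shared

# PCIe 1.x: 2.5 GT/s per lane per direction; 8b/10b encoding leaves
# 2.0 Gbps of usable bandwidth per lane, per direction.
lane_gbps = 2.5 * 8 / 10                 # 2.0 Gbps per lane, per direction

for lanes in (1, 4, 8):
    per_dir = lanes * lane_gbps
    print(f"x{lanes}: {per_dir:.0f} Gbps per direction, "
          f"{2 * per_dir:.0f} Gbps bidirectional")

print(f"PCI-X/133 shared bus: {pci_x_gbps:.1f} Gbps total")
```

A single x8 link thus carries roughly twice the entire PCI-X bus's capacity in each direction, and each link is dedicated rather than shared.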

Midplane Model

Privately owned NextIO has been working on its PCIe switching solution since the company's inception in 2003. The concept is simple: A midplane-switched PCIe fabric moves I/O from the blade to the backplane more efficiently, and the best place for I/O adapters is at the backplane rather than on the blade. NextIO believes its approach can reduce I/O costs by more than 50 percent, support a greater number of I/O options per chassis and simplify the adoption of devices that support hardware-based IOV in the future.

PCIe was designed as a switched architecture: Because it is a point-to-point serial protocol rather than a shared bus, PCIe must be bridged and switched at the motherboard level to function. NextIO's approach aggregates PCIe links from multiple blades and uses very low-latency switching to provide shared PCIe connectivity to independent I/O modules. The switching module works with existing technology, so a conventional blade chassis design requires no software changes and little hardware modification. A proof of concept was demonstrated at WinHEC this year using a combination of unmodified servers from Dell and Fujitsu-Siemens, NextIO's PCIe switching module and IOV-capable cards from LSI Logic and Neterion.
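The structural difference between per-blade adapters and a shared switched midplane can be modeled in a few lines. This is an illustrative sketch, not NextIO's implementation; the eight-blade chassis and the two module types are assumptions for the example:

```python
# Illustrative model (not NextIO's design): a midplane switch gives every
# blade a path to every shared I/O module, so no blade needs its own
# NIC or HBA.
blades = [f"blade-{n}" for n in range(1, 9)]        # hypothetical 8-blade chassis
io_modules = ["10GbE-module", "FC-module"]          # shared backplane adapters

# Conventional design: one adapter of each type on every blade.
per_blade_adapters = len(blades) * len(io_modules)  # 16 adapters in the chassis

# Switched-midplane design: each module is shared by all blades.
shared_adapters = len(io_modules)                   # 2 modules serve the chassis

# The switch provides full any-to-any reachability between blades and modules.
fabric = {(b, m) for b in blades for m in io_modules}
assert all((b, m) in fabric for b in blades for m in io_modules)
print(per_blade_adapters, "per-blade adapters vs", shared_adapters, "shared modules")
```

The adapter count drops from one per fabric per blade to one per fabric per chassis, which is where NextIO's cost and redundancy arguments come from.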

NextIO is quick to point out that its switched PCIe solution doesn't compete with InfiniBand, Fibre Channel or Ethernet. Its focus is on more efficient use of these fabrics by simplifying shared access to any I/O device from any blade in the system. The next big trend in I/O is hardware-level virtualization, and the bandwidth and expansion capabilities of the basic PCIe interface offer substantial flexibility for future virtualization apps.

NextIO doesn't anticipate product availability of its PCIe switching module until the second half of 2007, and the IOV-enabled devices that add substantially to NextIO's value proposition are expected to be scarce in the same time frame. This leaves an opening for 10 Gigabit Ethernet to find a home on server blades, and companies such as Broadcom, Chelsio, Intel and NetXen have silicon that could fill the performance gap, provided 10 Gigabit costs drop to reasonable levels.

Getting The Bus Out Of The Box

NextIO is the only company specifically focused on a midplane solution for blade servers, but the Advanced Switching Interconnect SIG is working to incorporate PCIe as the backplane fabric for interchassis apps. The group says the design would enable a scalable and extensible backplane interconnect, supporting advanced capabilities--such as tunneling, multicast, queuing, data streaming and management packets for QoS (quality of service)--that go well beyond standard PCIe features. Storage vendor Xyratex, an ASI-SIG member, has focused on the potential of PCIe, working with StarGen to develop externalized PCIe as a high-performance storage interface. This differs from NextIO's approach and is targeted to compete with other high-performance interconnects, such as Serial Attached SCSI, Fibre Channel and InfiniBand.

NextIO's switched PCIe is getting a lot of attention from resellers, but the technology's value proposition is heavily dependent on the availability of IOV-enabled devices. Although many enterprise-level I/O vendors are likely working on IOV devices, only a few companies--such as Neterion and LSI Logic--are actually showing them. The economics cut against adapter vendors, too: Once IOV devices are in use in the enterprise, there will be no need to put expensive 10 Gigabit Ethernet adapters on each blade, since IOV-enabled 10 Gigabit Ethernet modules on the backplane could support the needs of the entire chassis. Adding to IOV's difficulties, the PCI-SIG isn't expected to produce a working standard until the end of this year.
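The economics of shared versus per-blade 10 Gigabit Ethernet reduce to simple arithmetic. The prices below are placeholders chosen for illustration, not quotes from any vendor:

```python
# Hypothetical cost sketch: per-blade 10GbE adapters vs one shared
# IOV-enabled module on the backplane. All prices are placeholders.
blades = 16
per_blade_adapter_cost = 800        # assumed cost of one 10GbE adapter
shared_module_cost = 2000           # assumed cost of one IOV-enabled module

per_blade_total = blades * per_blade_adapter_cost  # every blade buys its own
shared_total = shared_module_cost                  # one module serves the chassis

savings = 1 - shared_total / per_blade_total
print(f"chassis-level savings: {savings:.0%}")
```

Even if the shared module costs several times what a single adapter does, one module amortized over a full chassis undercuts per-blade adapters, which is exactly why adapter vendors may be wary of IOV.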

There's evidence that NextIO's approach is sound. Sun's new Sun Blade 8000 Modular System features a pass-through midplane architecture that offers two x4- and four x8-configuration PCIe backplane links per server module, for a total of 160 Gbps of aggregate bandwidth per blade. This is not a switched solution, but it certainly illustrates the performance capabilities of PCIe, the viability of carrying it through the chassis midplane and the incorporation of modular I/O at the backplane.
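The Sun figure checks out arithmetically at PCIe 1.x rates, counting both directions of each link:

```python
# Checking the Sun Blade 8000 number: two x4 and four x8 PCIe links
# per server module, at 2.0 Gbps usable per lane, per direction
# (PCIe 1.x after 8b/10b encoding overhead).
lanes = 2 * 4 + 4 * 8                    # 40 lanes per blade
usable_per_lane = 2.0                    # Gbps, each direction
aggregate = lanes * usable_per_lane * 2  # count both directions
print(aggregate, "Gbps")                 # matches the quoted 160 Gbps
```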

It seems obvious that a switched PCIe midplane could bring greater bandwidth and hardware cost savings to blade servers through the effective use of virtualization technology. But it's hard to say whether this is a position NextIO can hold for the time it takes the market to develop: Other PCIe companies could field competitive solutions. NextIO has so far been unwilling--reasonably so--to share technical specifics that would show what might prevent other PCIe switch companies from following its lead.

Steven Hill is an NWC technology editor. Write to him at [email protected].
