PCI-E Offers Expansion and Interconnectivity

Bandwidth-challenged PCI will give way this year to a new standard, PCI Express, which requires no bridge between your expansion slots and motherboard chips.

May 7, 2004


PCI has gone through several changes over the years, including 64-bit versions at both 33 MHz and 66 MHz. PCI-X increased the 64-bit parallel bus to 133 MHz and is backward-compatible with previous PCI versions. Meanwhile, Gigabit Ethernet, 2-Gb Fibre Channel and Ultra320 SCSI have pushed the expansion-slot envelope.

These days, parallel architectures come up short in both real estate and total bus bandwidth. PCI uses a shared bus, which caps total bandwidth regardless of the number of slots. Server manufacturers have had to install multiple buses to work around this limit.
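To see why the shared bus runs out of headroom, here's a quick back-of-the-envelope sketch in Python (the function name and exact figures are ours, for illustration; real-world throughput lands well below these peaks once arbitration and protocol overhead are counted):

    # Peak bandwidth of a shared parallel PCI-family bus, in MB per second
    def parallel_bus_bandwidth_mb(width_bits: int, clock_mhz: int) -> float:
        # (bits per transfer / 8) bytes, times transfers per second (MHz)
        return width_bits / 8 * clock_mhz

    print(parallel_bus_bandwidth_mb(32, 33))    # classic PCI:    ~133 MB/s
    print(parallel_bus_bandwidth_mb(64, 66))    # 64-bit/66-MHz:  ~528 MB/s
    print(parallel_bus_bandwidth_mb(64, 133))   # PCI-X 133:    ~1,064 MB/s

    # Every card on the bus shares that single figure, so a few busy
    # Gigabit Ethernet or Fibre Channel cards can eat most of it.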

PCI-E, however, offers ample bandwidth to meet the growing demand (for more on the PCI-E spec, see ID# 1509rd2). Taking a layered approach with packetized data transfers, it lets you use various physical mediums, such as copper or optical. Moreover, it's compatible with the current PCI specification.

The performance of a PCI-E device is characterized by the number of "lanes" it has. Each lane can carry 250 MB per second of throughput (excluding overhead) in either direction. PCI-E cards initially will come in 1x, 4x, 8x and 16x configurations, based on the number of lanes. A 16x configuration, for instance, will have a total throughput of 4 GB per second in each direction, excluding overhead.
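The lane arithmetic is simple enough to sketch in a few lines of Python (the constant and function names below are ours, purely for illustration):

    LANE_MB_PER_SEC = 250  # usable MB/s per lane, per direction, excluding overhead

    def pcie_throughput_mb(lanes: int) -> int:
        # Per-direction throughput in MB/s for an Nx PCI Express link
        return lanes * LANE_MB_PER_SEC

    for lanes in (1, 4, 8, 16):
        print(f"{lanes:>2}x link: {pcie_throughput_mb(lanes):>5} MB/s per direction")
    # The 16x case prints 4000 MB/s--the 4 GB per second cited above.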

Because PCI-E is a point-to-point serial connection, total bus bandwidth is no longer an issue. Each card is guaranteed its bandwidth, and you don't have to worry about oversubscribing as with PCI and PCI-X. Unfortunately, few PCI-E cards and slots are available so far.

Playing the Slots

So what about your current 64/66 PCI and PCI-X 133 cards? Don't worry--server-hardware manufacturers will include PCI-X slots as an interim measure on the way to PCI-E. Look for PCI-E-compliant servers by year's end, probably with more PCI-X slots than PCI-E at first. Subsequent models will add PCI-E slots as PCI-E cards become more common. It'll take about three years for server vendors to drop their PCI-X slots altogether.

Most early PCI-E cards will include a PCI-to-PCI-E bridge chip, since today's silicon designs are centered around PCI and PCI-X. Most native PCI-E cards are slated for the first quarter of next year.

Meanwhile, the only bus that comes close to the bandwidth of PCI-E is the AGP (Accelerated Graphics Port) 8x bus. AGP is an Intel spec built to handle the huge data-movement requirements of graphics cards, particularly for CAD workstations and 3-D modeling. The AGP 8x port's throughput is about 2 GB per second.
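To put that next to PCI-E, here's a quick, illustrative comparison in Python (we use the commonly cited 2,133 MB per second for AGP 8x; note that AGP's figure is one shared direction, while PCI-E 16x moves 4 GB per second each way):

    AGP_8X_MB_PER_SEC = 2_133            # AGP 8x: ~2.1 GB/s, one shared direction
    PCIE_16X_PER_DIR_MB_PER_SEC = 4_000  # PCI-E 16x: 4 GB/s in each direction

    print(round(PCIE_16X_PER_DIR_MB_PER_SEC / AGP_8X_MB_PER_SEC, 1))      # ~1.9x one-way
    print(round(2 * PCIE_16X_PER_DIR_MB_PER_SEC / AGP_8X_MB_PER_SEC, 1))  # ~3.8x, both directions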

Eventually, most graphics cards will go in PCI-E 16x slots, eliminating the need for a special-purpose slot and connector. PCI-E 16x should fill the needs of high-end graphics cards, providing roughly twice the one-way throughput of AGP 8x--and nearly four times the aggregate, counting both directions. So beware: The AGP cards you buy today for your CAD and other advanced graphics workstations may be obsolete before long.

So what's the next server bottleneck once PCI-E takes over? Probably memory speed and the memory interconnect. One-to-one clock speeds between memory and CPU are still a long way off.

InfiniBand, meanwhile, is no longer an I/O technology the average enterprise should consider adopting. This channel-based interconnect architecture for servers has been largely ignored by mainstream computer and networking vendors because PCI-E and improvements in interconnect technology have rendered it moot. Intel, for instance, has abandoned its InfiniBand development, and several of the original InfiniBand start-ups have either merged or gone under. It's not that InfiniBand was a flawed technology--it just suffered from poor timing with PCI-E's emergence. Although InfiniBand is no longer a viable fit for the corporate data center, it does make sense in high-end supercomputing clusters.

One of the confusing things about PCI-E is that it's both an expansion and an interconnect specification. In the past, there were strict lines between chip-to-chip architectures, such as HyperTransport and RapidIO, and expansion technologies, such as PCI. PCI-E blurs that line.

Intel is working on chipsets that communicate with each other and the processor via PCI-E, eliminating the need for bridging between the expansion slots and the chips on the motherboard. So why not just go with PCI-E for interconnect? PCI-E isn't the only game in town: HyperTransport and RapidIO still have a major following. AMD, the creator and a big supporter of the HyperTransport protocol, has built HyperTransport into its Opteron and Athlon 64 lines of processors as a standard chip-to-chip interconnect. IBM and Motorola are pushing RapidIO, as are networking companies such as Alcatel and Lucent.

In addition, HyperTransport 2.0 supports mapping to PCI Express, which eases integration. Intel is making chipsets that are natively PCI-E, and systems featuring native PCI-E for interconnect as well as expansion will hit the market in the second half of this year.

The RapidIO interconnect specification is maintained by the RapidIO Trade Association. The spec is used largely for backplane I/O in embedded systems and for chip-to-chip interconnect functions. It scales to 10 Gbps and has recently been expanded with a compatible version called RapidFabric, which covers low-cost options for data-plane applications in networking and telecommunications. RapidIO will appear in many embedded systems but has no impact in the PC server and workstation field.

The HyperTransport Consortium oversees the HyperTransport spec. AMD has committed to HyperTransport, going so far as to make it the communications protocol for its Opteron and Athlon 64 lines of microprocessors for servers and workstations. Chipset manufacturers such as VIA and SiS are beginning to show off full HyperTransport versions of their chipsets for AMD processors. So if HyperTransport 2.0 comes with the servers or workstations you buy, you can use both technologies, since HyperTransport maps to PCI-E.

But don't sweat the interconnect options; the vendors will sort out chip-to-chip technology. The bigger issue for enterprises is server expansion. There's no doubt PCI-E is the heir apparent in server expansion, and hopefully its reign will be as long and productive as its predecessor's. With PCI-E a reality in servers next year, understanding the technology, as well as your choices, will ensure you make the right purchasing decisions.

Steven J. Schuchart Jr. covers storage and servers for NETWORK COMPUTING. Write to him at [email protected].


PCI Express (PCI-E) comes in various configurations, and it's easy to get tripped up by the terminology. So here are a few practical examples of the bandwidth of some current and future PCI-E interface cards:

A Gigabit Ethernet card uses about 100 MB per second, for instance, and a 2-Gb Fibre Channel card uses roughly 200 MB per second. Theoretically, a 4-Gb Fibre Channel card uses 400 MB per second. 10-Gb Ethernet and the upcoming 10-Gb Fibre Channel specifications will transfer up to 1,000 MB, or 1 GB, per second. A 16x PCI-E configuration, meanwhile, performs at a theoretical 4 GB per second.

The numbers can be confusing. For instance, a 16x configuration is a 4-GB-per-second connection because the interface can move 4 GB per second in each direction. But some companies and the PCI-SIG call this an 8-GB-per-second connection, adding the 4 GB per second in each direction together. (Why? Marketing.) It's important to consider this when reading card and server vendor specifications: Some use the PCI-SIG's "total bandwidth" definition, and others the symmetrical bandwidth number.

Nothing's wrong with either definition, but it can cloud the PCI-E picture, especially if you're trying to make an educated buying decision. Just remember that each lane is worth 250 MB per second in each direction. That way, you can calculate whether you have enough lanes.
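Here's that lane math worked out in Python for the cards discussed above (the figures are the approximate per-direction rates quoted in this sidebar; the names and dictionary are ours, for illustration):

    import math

    LANE_MB_PER_SEC = 250  # usable MB/s per lane, per direction

    card_needs_mb = {
        "Gigabit Ethernet":   100,
        "2-Gb Fibre Channel": 200,
        "4-Gb Fibre Channel": 400,
        "10-Gb Ethernet":     1000,
    }

    for card, need in card_needs_mb.items():
        lanes = math.ceil(need / LANE_MB_PER_SEC)
        print(f"{card}: {need} MB/s -> needs at least a {lanes}x slot")
    # Gigabit Ethernet fits comfortably in a 1x slot; 10-Gb Ethernet needs 4x.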
