Intel, Fusion-io Embrace Open Compute

The Open Compute project shows us what can happen when a group of large customers with deep pockets takes control of the design of their own hardware. At last week's Open Compute Summit, Intel and Fusion-io demonstrated their commitment to the project by announcing new products. Fusion-io went a step further by releasing a product design into the wild world of open source.

The basic approach behind the Open Compute project is to cut the cost of running a hyper-scale data center by reducing both the complexity of the hardware and the number of redundant devices. If you're running 10,000 servers to support your SaaS application, for example, you can use servers without GPUs, since no one ever looks at the console, and rely on three shared power supplies in the Open Rack rather than two redundant supplies in every server.
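
To put that redundancy argument in rough numbers, here's a quick back-of-envelope sketch. The servers-per-rack and supplies-per-rack figures are illustrative assumptions on my part, not Open Compute specifications.

```python
# Back-of-envelope comparison: per-server redundant supplies vs. shared
# rack-level supplies. Rack density and supply counts are illustrative
# assumptions, not figures from the Open Compute spec.
SERVERS = 10_000
SERVERS_PER_RACK = 30          # assumed Open Rack density
PSUS_PER_SERVER = 2            # conventional 1+1 redundant supplies
PSUS_PER_RACK = 3              # shared supplies in the Open Rack power zone

conventional_psus = SERVERS * PSUS_PER_SERVER
racks = -(-SERVERS // SERVERS_PER_RACK)          # ceiling division
open_rack_psus = racks * PSUS_PER_RACK

print(f"Conventional: {conventional_psus:,} power supplies")
print(f"Open Rack:    {open_rack_psus:,} power supplies across {racks} racks")
```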

Intel announced a new high-speed optical interconnect technology that uses silicon photonics rather than the exotic materials used in today's 100-Gbps, or even 10-Gbps, optical transceivers. The new interconnect can transport PCIe, 100-Gbps Ethernet or other traffic at 100 Gbps today through personality modules. Higher data rates are expected in the future over multimode fiber at data center distances.

Silicon photonics lets Intel build and test the combined silicon and optical components with totally automated equipment at the wafer level, so it can accept the good chips and reject the bad ones early in the process, before the company has invested in cutting and packaging them into optical modules.
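
The economics of testing early are easy to illustrate. The sketch below uses made-up cost and yield numbers, not anything Intel has disclosed, but it shows why rejecting bad dies before packaging matters:

```python
# Illustrative yield economics: testing at the wafer level means packaging
# cost is only spent on known-good dies. All cost and yield numbers here
# are invented for illustration.
DIE_COST = 10.0        # cost to fabricate one photonics die
PACKAGE_COST = 40.0    # cost to cut, align and package one module
YIELD = 0.80           # fraction of dies that turn out good

# Test after packaging: every die, good or bad, gets packaged first.
cost_per_good_late = (DIE_COST + PACKAGE_COST) / YIELD

# Test at the wafer: only good dies are packaged.
cost_per_good_early = DIE_COST / YIELD + PACKAGE_COST

print(f"Test after packaging: ${cost_per_good_late:.2f} per good module")
print(f"Test at wafer level:  ${cost_per_good_early:.2f} per good module")
```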

On stage at the summit, Andy Bechtolsheim explained that today's conventional 100-Gbps optical transceivers are essentially hand-made, with skilled workers aligning the lasers. These transceivers can't be tested until they're almost completely assembled. As a result, Cisco's current short-range 100-Gbps optics, for example, sell for more than $10,000 each, keeping 100-Gbps interconnects too expensive for most applications.

While Intel isn't yet talking about price or delivery dates, the new silicon photonics interfaces should be significantly less expensive, thus making high-speed interconnects over skinny little cables practical. Asian ODM (original design manufacturer) Quanta was showing a mechanical mockup of the new Intel interconnect in an Open Rack configuration on the show floor.

Flash Fusion

The other big announcement came from Fusion-io, which not only debuted a new ioScale flash card, but also announced it would "open source" the design so other organizations could build ioScale cards or variations themselves. This opens the door for the ODMs to build Fusion-io flash into the motherboard or into hot-swappable modules like the PCIe SSD Micron makes for Dell's latest servers.

The ioScale, and in fact Open Compute in general, was designed specifically for Web 2.0 data centers where resiliency is provided by software. In his presentation at the summit, Fusion-io CEO David Flynn pointed out that the ioScale was designed for "commercial, not enterprise, levels of reliability," saying that rather than over-engineer the card, the company relies on the application layer to deal with the occasional failure.
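
What "let the application layer deal with it" looks like in practice is roughly this: write each object to several machines and treat an individual device failure as routine. The sketch below is purely illustrative; the node names and put() call are hypothetical, not Fusion-io or Open Compute APIs.

```python
# Minimal sketch of application-level resiliency: write each object to
# several flash-backed nodes and absorb a failed write in software rather
# than demanding that every card be bulletproof. Everything here is
# hypothetical, not a Fusion-io API.
import random

REPLICAS = 3
nodes = ["flash-node-1", "flash-node-2", "flash-node-3", "flash-node-4"]

def put(node, key, value):
    """Pretend write; occasionally 'fails' to stand in for a dying card."""
    if random.random() < 0.05:
        raise IOError(f"{node} unavailable")
    return True

def replicated_write(key, value):
    ok = 0
    for node in random.sample(nodes, REPLICAS):
        try:
            put(node, key, value)
            ok += 1
        except IOError:
            continue            # the software layer absorbs the failure
    if ok == 0:
        raise RuntimeError("all replicas failed")
    return ok

print(replicated_write("user:42", b"profile-data"), "replicas written")
```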

That said, the ioScale is still an impressive card. It supports up to 3.2 Tbytes of accessible flash on a half-length PCIe card while still offering all the software interfaces Fusion-io is known for, including disk emulation, atomic writes and its directFS. It's also a bootable resource if used in a motherboard with a UEFI BIOS, which I believe is a first for the company. The relatively low price of $3.89/Gbyte, or roughly $12,500 for a 3.2-Tbyte card, is also new for Fusion-io. In his presentation, Flynn offered a 30% discount for cards to be used in Open Compute-compliant servers. While that's a great deal for evaluators, real Open Compute users buy in sufficiently large quantities that they'd get big discounts anyway.
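
For anyone checking the math, the announced per-gigabyte price and the quoted 30% discount work out roughly as follows (the price and discount figures come from the announcement; the arithmetic is mine):

```python
# Quick check of the ioScale pricing math from the announcement.
price_per_gbyte = 3.89          # dollars, as announced
capacity_gbytes = 3200          # the 3.2-Tbyte card
list_price = price_per_gbyte * capacity_gbytes
open_compute_price = list_price * (1 - 0.30)    # 30% Open Compute discount

print(f"List price:        ${list_price:,.0f}")         # roughly $12,500
print(f"With 30% discount: ${open_compute_price:,.0f}")
```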

While it may be a few years before corporate America is buying Open Compute-style equipment, with rack-level power supplies and hyper-dense compute, the customer-driven design and rate of innovation that the Open Compute project has created will generate good stuff for the rest of us.