Supplies Network, a privately owned wholesaler of IT products, decided to roll out a virtual I/O product as part of its ongoing effort to modernize its data center. The company has virtualized all its servers and is building out a private cloud infrastructure. The goal of the modernization is to boost scalability and performance while cutting costs and complexity, so the last thing the St. Louis-based company wanted was a complicated, expensive cabling infrastructure connecting its private cloud, explains Dan Shipley, data center architect for Supplies Network.
The company considered several options, including alternatives based on Fibre Channel over Ethernet (FCoE), but went with a virtual I/O package from Xsigo Systems. "Traditionally, you need different cables for different networks on the Ethernet side, with ten to twelve Gigabit Ethernet or Fibre Channel connections on the back of each server," says Shipley. "All of these cables, switches, ports, etc. are complex and expensive." He says that when the company looked at how it wanted to build out a cluster of virtualized servers, a key design element was a consolidated wiring structure.
Xsigo offers its I/O Director, a switch-like device that's designed to make a single network connection appear to be multiple virtual NICs or Host Bus Adapters (HBAs). Xsigo's technology offloads NIC emulation to the I/O Director. Each server has one physical InfiniBand card (or two for redundancy), which the I/O Director can make appear to be multiple InfiniBand, Ethernet or Fibre Channel cards. Each InfiniBand card supplies 10Gb/s of bandwidth that can be dynamically allocated between network and storage requirements, according to Xsigo. Meanwhile, multiple Ethernet and Fibre Channel connections are replaced by a single thin InfiniBand cable. The I/O Director can connect to the SAN, LAN, and IPC networks.
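The virtual I/O model described above can be sketched in a few lines of Python: one physical InfiniBand link per server is carved into virtual NICs and HBAs, and the split can be changed on the fly. All class and method names here are illustrative assumptions, not Xsigo's actual API.

```python
# Hypothetical sketch of carving one physical link into virtual devices.
# Names are illustrative only -- this is not Xsigo's real interface.

class VirtualDevice:
    def __init__(self, name, kind, gbps):
        self.name = name
        self.kind = kind   # "vNIC" (Ethernet) or "vHBA" (Fibre Channel)
        self.gbps = gbps

class PhysicalLink:
    """A server's single InfiniBand connection to the I/O Director."""
    def __init__(self, capacity_gbps=10):
        self.capacity_gbps = capacity_gbps
        self.devices = []  # virtual NICs/HBAs carved from this link

    def allocated_gbps(self):
        return sum(d.gbps for d in self.devices)

    def add_device(self, device):
        # A link can present many virtual devices, but their combined
        # bandwidth cannot exceed the physical capacity.
        if self.allocated_gbps() + device.gbps > self.capacity_gbps:
            raise ValueError("virtual devices exceed link capacity")
        self.devices.append(device)

    def reallocate(self, name, new_gbps):
        """Dynamically shift bandwidth between virtual devices."""
        dev = next(d for d in self.devices if d.name == name)
        others = self.allocated_gbps() - dev.gbps
        if others + new_gbps > self.capacity_gbps:
            raise ValueError("reallocation exceeds link capacity")
        dev.gbps = new_gbps

link = PhysicalLink(capacity_gbps=10)
link.add_device(VirtualDevice("eth0", "vNIC", 4))  # LAN traffic
link.add_device(VirtualDevice("fc0", "vHBA", 4))   # SAN traffic
link.reallocate("eth0", 6)                         # shift bandwidth to the LAN
```

The point of the sketch is the invariant: however the virtual devices are defined, their combined allocation stays within the single physical link's capacity, which is what lets one cable replace a dozen.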
Supplies Network's virtualized server deployment consists of 16 Hewlett-Packard DL380 and DL360 servers, VMware's vSphere software, and a NetApp SAN for storage. Because NetApp's SAN does not offer native InfiniBand support, Supplies Network relies on an external InfiniBand interface to connect it to the private cloud. Shipley says he and his staff considered the pros and cons of different solutions. "InfiniBand has been around for a while, it has been used in high-performance computing and super data centers. Its standardized protocol stack is mature, and runs at 40Gbit speeds today," he says. FCoE, however, "is still in a state of flux and we are worried about vendor interoperability and vendor lock-in."

There were other concerns, too. In particular, Shipley says that with an FCoE link, storage traffic always takes precedence. "You can't say on an FCoE link, use only 4Gb/s of fiber for storage and use the rest for streaming video," he says. "We were concerned about that." Shipley points to other pros of InfiniBand: native support of remote direct memory access (RDMA), and its lower latency compared with FCoE. Shipley also believes InfiniBand's security is stronger.
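The distinction Shipley draws can be made concrete with a toy comparison: under strict priority, storage demand is always served first; with a configurable cap, storage is limited to a fixed share and the remainder is guaranteed to other traffic. This is an illustrative simplification, not either protocol's actual scheduler.

```python
# Toy models of the two bandwidth-sharing policies discussed above.
# Purely illustrative -- not FCoE's or InfiniBand's real scheduling logic.

def strict_priority(link_gbps, storage_demand, other_demand):
    """Storage always takes precedence (the FCoE behavior Shipley describes)."""
    storage = min(storage_demand, link_gbps)
    other = min(other_demand, link_gbps - storage)
    return storage, other

def capped_share(link_gbps, storage_demand, other_demand, storage_cap):
    """Storage is limited to a configured cap; the rest is free for other traffic."""
    storage = min(storage_demand, storage_cap)
    other = min(other_demand, link_gbps - storage)
    return storage, other

# Storage wants 8 Gb/s and streaming video wants 6 Gb/s on a 10 Gb/s link:
print(strict_priority(10, 8, 6))              # storage crowds out video: (8, 2)
print(capped_share(10, 8, 6, storage_cap=4))  # cap leaves room for video: (4, 6)
```

With strict priority, a storage burst starves the video stream; with a 4Gb/s cap, video gets its full 6Gb/s, which is the control Shipley says FCoE could not give him.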
Bandwidth, scalability and flexibility are important to Supplies Network, which has more than 5,000 products in its catalog, runs four distribution centers to supply products nationally, and offers managed print services, among other things. Xsigo's InfiniBand fabric can scale beyond 2,000 nodes and provides performance up to 40Gb/s per link, according to the company. Its open fabric connects to widely available server I/O cards and blade components from leading vendors including Dell, Hitachi, HP, IBM, and Supermicro. Xsigo says the technology also includes the flexibility to change and expand I/O configurations on the fly in response to new requirements.