“Modular interconnect PCIe has promise, but it is very early to begin talking about it as an alternative to SANs,” notes Jim Bagley, a senior analyst at Storage Strategies NOW. “The virtualization software layer necessary to do this is a big undertaking, and there are a number of alternative methods.” However, he notes that there has been activity in the PCIe space recently, with Micron purchasing PCIe virtualization company Virtensys, among other initiatives.
Network Computing blogger Howard Marks said one of the coolest things he saw at the recent Storage Networking World event was an innovative way to package flash: a 2.5-inch PCIe SSD from Micron that Dell and HP can use in their Romley servers. “They’ve squeezed the PCIe connection into the SAS/SATA connector so server vendors can make a few of their hot-swap drive slots into multi-purpose hot-swap SAS/SATA/PCIe slots with 4 x2 PCIe lanes,” Marks wrote.
Jim Handy, an SSD analyst at Objective Analysis, calls PCIe “an interesting and complex issue.” He says he has spoken with a number of companies that were confronted with the decision of whether to add a SAN to the tune of $400,000 and up, but instead elected to try using a $7,000 PCIe card. Those companies “found that they got a huge amount of headroom that the SAN wouldn’t have provided. Trying this takes a leap of faith, though,” he cautions. “You can’t run some calculation to tell you that it’s going to turn out that way. If you could, the SAN companies would all be in enormous trouble right now.”
Handy believes SAN providers will face some challenges as this technology finds its niche. “Successful ones will adopt flash to provide their customers with better price/performance points,” he says. “Some may miss the boat as they attempt to defend their turf.”
And customers faced with I/O performance bottlenecks are finding that they can move past the problem by migrating from expensive architectures to cheaper ones, a move Handy says is not at all intuitive. “Over the long run, the storage performance continuum will be reset to a new trendline that offers higher performance at a lower price,” he says. “What a boon to data center managers, especially at a time when data requirements are ballooning.”
The biggest drawback of PCIe networking, says Bagley, is the limited distance that the bus architecture can support. “Improvements in this area will take some time to develop, for example, PCIe over optical cable.” Additionally, he says, “the lack of a standards-based virtualization layer between servers means that early adopters are going to have to home-brew a lot of the applications and management functions.”
In terms of what PCIe brings to the table, it lacks basic functions like capacity reporting and operational features like replication and deduplication, Bagley says: management features that are taken for granted in SANs but will need to be added, as they were for Network Attached Storage (NAS).
“With 10Gb Ethernet fabrics, multiport adapters, and switches for reliability already available, and higher speeds in the pipeline, it will take time for PCIe networking to take hold,” says Bagley.