At first glance, VRTX is simply a four-blade chassis with quieter fans, so you can put it in the office instead of the data center. In reality, there's more revolutionary technology in this box.
My problem with blade servers has always been the proprietary mezzanine cards the blades use. If you buy your blade system from Cisco, IBM, HP, Dell or even Supermicro, you've limited your I/O options to the small number of mezzanine cards your blade vendor either resells or endorses.
For instance, if you have Cisco UCS blades and want to add a Virident SSD because you love its FlashMAX Connect software, you're out of luck because Cisco only endorses Fusion-io flash cards.
VRTX breaks that mold. Rather than giving each blade one or two mezzanine card slots, the mezzanine cards on the VRTX blades connect to a pair of PCIe switch chips on the chassis motherboard.
Those PCIe switches are, in turn, connected to eight bog-standard PCIe slots in the chassis (three full-height and five half-height) and to a shared PERC SAS/SATA RAID controller. All of that fits in a slightly oversized tower or, with the optional rackmount kit, a 5U package. The VRTX uses the same M620 blades as Dell's data center blade chassis.
Each blade also has four 1-Gbps Ethernet LOM (LAN on motherboard) ports that can be connected to a switch in the back of the cabinet. Those who just have to use switches from their favorite networking vendor (cough, Cisco, cough) can put a pass-through module in the chassis instead of the switch, though that will only pass two of the three Ethernet ports from each blade.
Each of the eight PCIe slots can be assigned to any of the server blades, though the blade does have to be power cycled to recognize that it now has an extra PCIe slot. At launch, Dell supports only a limited set of PCIe cards: predominantly 1-Gbps and 10-Gbps Ethernet adapters, plus a SAS HBA and AMD FirePro W7000 graphics cards. However, I expect Dell to add support for more cards based on customer demand.
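To make the assignment model concrete, here's a toy Python sketch of how a chassis manager might track slot-to-blade mapping, including the fact that a blade doesn't see a newly assigned slot until it power cycles. The class and method names are my own illustration, not Dell's CMC interface.

```python
# Toy model of VRTX-style PCIe slot assignment (illustrative only;
# this is not Dell's management API). Each slot belongs to at most one
# blade, and a newly assigned slot isn't visible until the blade
# power cycles.

class Chassis:
    def __init__(self, num_slots=8):
        self.slot_owner = {s: None for s in range(1, num_slots + 1)}
        self.pending_cycle = set()          # blades that must power cycle

    def assign_slot(self, slot, blade):
        if self.slot_owner[slot] is not None:
            raise ValueError(f"slot {slot} already assigned")
        self.slot_owner[slot] = blade
        self.pending_cycle.add(blade)       # blade won't see the card yet

    def power_cycle(self, blade):
        self.pending_cycle.discard(blade)   # now the new slot is visible

    def visible_slots(self, blade):
        # Simplification: a blade sees its slots only once it has
        # power cycled after the most recent assignment.
        if blade in self.pending_cycle:
            return []
        return [s for s, b in self.slot_owner.items() if b == blade]

chassis = Chassis()
chassis.assign_slot(1, "blade-1")           # e.g. a 10GbE adapter
print(chassis.visible_slots("blade-1"))     # [] -- not until power cycle
chassis.power_cycle("blade-1")
print(chassis.visible_slots("blade-1"))     # [1]
```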
And the truth is, other than storage HBAs, no one checks to see if a card is blessed by their vendor before plugging it in. Thus, I'm excited about the ability to use a wider variety of PCIe cards. Not only will that let me use the PCIe SSDs and network cards I like, rather than the ones Dell has packaged as mezzanine cards, but because the PCIe slots are exposed at the back of the chassis, I can also use arbitrary cards with unique connectors. For instance, I could use a four-port video card for digital signage, or an InfiniBand card.
Then there's the storage story. There are 25 2.5-inch or 12 3.5-inch SAS/SATA drive bays connected through a SAS expander to the shared PERC RAID controller on the chassis motherboard. The RAID controller is seen by all the blades in the chassis, and it can assign logical drives from the common RAID pool to the individual blades. This makes for a robust storage back end.
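The shared-pool idea is easy to picture with a toy Python sketch: one pool of RAID capacity, logical drives carved out of it, each handed to a blade. Everything here (names, the capacity figure) is my own illustration, not the PERC firmware or Dell's management interface.

```python
# Toy model of a shared RAID pool (illustrative only). Logical drives
# are carved from one pool of capacity and assigned to blades, the way
# the VRTX's shared PERC presents virtual disks to its servers.

class SharedPool:
    def __init__(self, capacity_gb):
        self.free_gb = capacity_gb
        self.luns = {}                       # name -> (size_gb, blade)

    def create_lun(self, name, size_gb, blade):
        if size_gb > self.free_gb:
            raise ValueError("not enough free capacity in the pool")
        self.free_gb -= size_gb
        self.luns[name] = (size_gb, blade)

    def luns_for(self, blade):
        return [n for n, (_, b) in self.luns.items() if b == blade]

# Hypothetical numbers, just to exercise the model: call it 27 TB
# usable after RAID overhead.
pool = SharedPool(capacity_gb=27_000)
pool.create_lun("boot-1", 100, "blade-1")
pool.create_lun("data-1", 4_000, "blade-1")
pool.create_lun("boot-2", 100, "blade-2")
print(pool.luns_for("blade-1"))   # ['boot-1', 'data-1']
print(pool.free_gb)               # 22800
```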
More details about VRTX, and more photos, are available at Kevin Houston's blog.
Several bloggers were talking about VRTX at the show. While most of the discussion was around how a VRTX would, all by itself, make a perfect home lab, we did start to speculate about what the follow-on product should look like.
I, for one, would love to see a bigger, data center-oriented version. An eight- to 16-slot chassis with 10-Gbps LOMs plus the kind of PCIe slot switching that's in VRTX would be a step up from the more conventional blade chassis with mezzanine cards.
I had a brief discussion with the Dell folks about how cool it would be if they went the next step and made the PCIe switching dynamic, as NextIO and Virtensys did. That would let multiple blades access an SR-IOV card in a shared PCIe slot, allocating SR-IOV resources such as virtual NICs across multiple blades.
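The idea is that one physical card exposes many virtual functions (VFs), and a chassis manager doles them out per blade. A toy Python sketch of that bookkeeping (purely illustrative; real SR-IOV allocation happens in firmware and drivers, not a Python dict):

```python
# Toy model of sharing an SR-IOV NIC across blades (illustrative
# only). One physical function exposes a fixed number of virtual
# functions; a chassis manager could allocate them per blade.

class SriovNic:
    def __init__(self, num_vfs=16):
        self.free_vfs = list(range(num_vfs))
        self.assigned = {}                   # blade -> list of VF indices

    def allocate(self, blade, count):
        if count > len(self.free_vfs):
            raise ValueError("not enough free virtual functions")
        vfs = [self.free_vfs.pop(0) for _ in range(count)]
        self.assigned.setdefault(blade, []).extend(vfs)
        return vfs

nic = SriovNic(num_vfs=16)
print(nic.allocate("blade-1", 4))   # [0, 1, 2, 3]
print(nic.allocate("blade-2", 4))   # [4, 5, 6, 7]
print(len(nic.free_vfs))            # 8
```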
What features would you like to see in subsequent versions? And has Dell caught your interest with VRTX?