Everyone on Wall Street wants to eliminate data latency. But data doesn't sit in one place for long. It passes among market participants and through myriad network switches, servers and applications.
Where in your IT infrastructure will a technology upgrade or change have the biggest impact on data latency? Will colocating servers at an exchange make all the difference? How about installing InfiniBand? Is the answer a data caching solution that provides faster access to local data, or hardware accelerators in routers, workstations and servers? According to Tom Price, senior analyst at TowerGroup, colocation, faster internal networks and a state-of-the-art client-side messaging transport have the biggest impact on reducing latency.
At the SIFMA show this week, a wealth of hardware and software vendors are telling a slightly different latency-reduction story.
On the networking side, Cisco Systems (booth #2401) and Voltaire (booth #1762) are showing their InfiniBand switches. In March, a report put out by the Securities Technology Analysis Center in Chicago compared running Wombat market data feed software on a Cisco InfiniBand network versus a Gigabit Ethernet network. According to the report, the use of InfiniBand reduced mean latency by about 63 percent, reduced standard deviation by about 34 percent, and reduced outliers caused by request bursts, cutting roughly 30 to 35 milliseconds off of spikes.
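To put those percentages in perspective, the short sketch below applies the report's stated reductions to a hypothetical baseline. The report did not publish absolute baseline figures, so the 100-microsecond mean and 20-microsecond standard deviation here are assumptions chosen purely for illustration.

```python
# Apply the STAC report's reported InfiniBand improvements to an
# ASSUMED gigabit Ethernet baseline (illustrative figures only).
baseline_mean_us = 100.0   # assumed mean latency, microseconds
baseline_std_us = 20.0     # assumed standard deviation, microseconds

ib_mean_us = baseline_mean_us * (1 - 0.63)   # ~63% lower mean latency
ib_std_us = baseline_std_us * (1 - 0.34)     # ~34% lower std deviation

print(f"mean latency: {baseline_mean_us:.0f} -> {ib_mean_us:.0f} us")
print(f"std deviation: {baseline_std_us:.0f} -> {ib_std_us:.1f} us")
```

The standard-deviation figure matters as much as the mean for trading firms: a tighter distribution means fewer of the burst-driven spikes the report measured.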
The key feature of InfiniBand that reduces latency, according to Pramod Srivatsa, senior product manager at Cisco, is remote direct memory access (RDMA), a technique by which two or more computers transfer data directly from the main memory of one system into the main memory of another. Because the transfer requires no CPU, cache or context-switching overhead, and can proceed in parallel with other system operations, RDMA is useful wherever high-throughput, low-latency networking is needed, he explains. Cisco offers 20-gigabit-per-second InfiniBand.
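Real RDMA requires InfiniBand hardware and a verbs API, so it can't be demonstrated in a few portable lines. As a loose single-machine analogy, the sketch below uses Python's standard-library shared memory: two handles to the same segment stand in for the two hosts' buffers, and the "receiver" reads the payload in place rather than having it copied through an intermediate socket buffer. This is an illustration of the zero-copy idea only, not the RDMA protocol itself.

```python
# Loose analogy to RDMA's zero-copy transfer using POSIX shared memory
# (Python 3.8+ stdlib). The two handles below stand in for the sender's
# and receiver's memory; no per-message copy through a network stack.
from multiprocessing import shared_memory

# "Sender" allocates a buffer and writes the payload directly into it.
src = shared_memory.SharedMemory(create=True, size=64)
payload = b"quote: MSFT 29.51"
src.buf[:len(payload)] = payload

# "Receiver" attaches to the same segment by name and reads in place.
dst = shared_memory.SharedMemory(name=src.name)
received = bytes(dst.buf[:len(payload)])
print(received.decode())  # quote: MSFT 29.51

dst.close()
src.close()
src.unlink()
```

In actual RDMA, the network adapter performs the remote write into pre-registered application memory, which is why the host CPU and kernel stay out of the data path entirely.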