
Latency Busters: 6 Technologies To Watch

  • Who cares if Moore’s law is fading? Often it is latency that’s the gating factor in the end-user experience, not the CPU. A captivating assortment of latency-reduction approaches has emerged, or continues to make an impact, juicing up data-center performance by shortening round-trip times among system components.

    The timing couldn’t be better. Pervasive Fibre Channel and Gigabit Ethernet links are showing their age, and IT managers are aware that these links may become tomorrow’s bottlenecks in a big data, IoT, and cloud-driven world.

    Here are six latency busters to keep an eye on as you consider the inexorable need for speed in the data center.

    (Image: Vladimer_Timofeev/iStockphoto)

  • iSER

    Short for “iSCSI Extensions for RDMA,” iSER is championed by Mellanox, a leading developer of RDMA-based networking hardware. iSER is an interface that uses Ethernet to carry data directly between server memory and storage devices. The protocol eliminates memory copies (a.k.a. memcpy), bypasses the operating system’s TCP/IP stack, and frees up the target system’s CPU. Microsoft reportedly described the technology as delivering “40Gbps of I/O with 0% CPU” for Azure.

    Because iSER uses standard Ethernet switches and cables, it’s low cost, and its price/performance ratio far exceeds that of Fibre Channel and InfiniBand. Its throughput approaches wire speed at 40 Gbps and even 100 Gbps. Because iSER is faster than iSCSI, FC, and FCoE – and easier to manage than the SCSI RDMA Protocol (SRP) – storage vendors are applying it to help customers do more with less.
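
    To see why bypassing the kernel stack matters, consider a rough calculation (a sketch; only the 40 Gbps figure comes from the article, and the stack costs are assumed round numbers): at wire speed, moving a 4 KB block takes under a microsecond, so software overhead dominates small-I/O latency.

    ```python
    # Back-of-the-envelope: at wire speed, the software stack, not the
    # wire, dominates small-I/O latency. Stack figures are assumptions.

    WIRE_GBPS = 40                      # iSER link speed cited above
    BLOCK_BYTES = 4 * 1024              # a typical 4 KB storage I/O

    wire_us = BLOCK_BYTES * 8 / (WIRE_GBPS * 1e9) * 1e6
    print(f"Wire time for 4 KB at {WIRE_GBPS} Gbps: {wire_us:.2f} us")  # ~0.82 us

    tcp_stack_us = 30   # kernel TCP/IP path with memory copies (assumption)
    rdma_us = 2         # RDMA path, zero-copy, kernel bypass (assumption)
    print(f"TCP/IP path total: ~{wire_us + tcp_stack_us:.1f} us")
    print(f"RDMA path total:   ~{wire_us + rdma_us:.1f} us")
    ```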

    (Image: Mellanox)

  • NVM

    Non-volatile memory (NVM), which approaches the speed and random access of DRAM with the persistence of hard drives, has shown successive write-latency improvements, down to 30-50 microseconds. The ensuing I/O improvements have led to NVM’s integration into Oracle Exadata, Violin Memory SSD products, and HPE's persistent memory technology. Plexistor’s use of NVM in a file-system solution reportedly cut latency to less than a tenth that of typical file systems today. NVM’s low latency has also enabled EMC XtremIO and Pure Storage to offer performance not previously possible or affordable.

    Intel and Micron’s new 3D XPoint chip showed just 12% of the latency of flash in Intel testing reported in January 2016; the companies say it lasts 1,000 times longer and performs 1,000 times faster than today's NAND memory.
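
    A quick back-of-the-envelope shows what those latencies mean for per-thread I/O rates (the 30-50 microsecond and 12% figures come from above; the 100-microsecond flash read is an assumed round number for comparison):

    ```python
    # Device latency bounds per-thread (queue depth 1) IOPS: one I/O
    # must complete before the next is issued, so IOPS = 1 / latency.
    for write_us in (50, 30):
        print(f"{write_us} us write latency -> ~{1e6 / write_us:,.0f} IOPS at QD1")

    flash_read_us = 100                  # assumed round number for NAND flash
    xpoint_us = flash_read_us * 0.12     # 12% of flash latency, per Intel testing
    print(f"3D XPoint at 12% of a {flash_read_us} us flash read: ~{xpoint_us:.0f} us")
    ```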

    (Image: SNIA)

  • Intelligent caching

    Content delivery networks (CDNs), perhaps the most popular of intelligent-caching schemes, keep content local on edge servers and continue to expand: approximately 62% of all Internet traffic globally will cross CDNs by 2019, up from 39% in 2014.

    Related external caching applications, such as storage gateways, are also expanding within the data center; they keep certain storage resources local so that latency-sensitive applications such as databases can function optimally.

    Internal caching promotes “hot” data to fast media, such as SSDs accessed via the NVM Express (NVMe) specification over the PCI Express (PCIe) bus. Considered the industry’s fastest I/O access method, NVMe boasts far less latency than SATA and SAS devices. DataDirect Networks (DDN) uses NVMe to reduce latency by eliminating the file-locking contention inherent in parallel file systems.
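
    The core “promote hot data to fast media” idea can be sketched as a simple LRU cache (a minimal illustration; the tiers are plain dicts standing in for NVMe and slower media, and all names are hypothetical):

    ```python
    from collections import OrderedDict

    class HotDataCache:
        """Minimal LRU sketch: recently used blocks stay on the fast
        tier; the least-recently used block is evicted when the fast
        tier fills. Real tiering engines are far more involved."""

        def __init__(self, fast_capacity, slow_tier):
            self.fast = OrderedDict()      # fast tier (e.g., NVMe)
            self.capacity = fast_capacity
            self.slow = slow_tier          # slow tier (e.g., SATA/SAS)

        def read(self, block_id):
            if block_id in self.fast:      # hot: serve from fast tier
                self.fast.move_to_end(block_id)
                return self.fast[block_id]
            data = self.slow[block_id]     # cold: fetch and promote
            self.fast[block_id] = data
            if len(self.fast) > self.capacity:
                self.fast.popitem(last=False)  # evict least-recently used
            return data

    disk = {n: f"block-{n}" for n in range(100)}
    cache = HotDataCache(fast_capacity=8, slow_tier=disk)
    cache.read(5)
    cache.read(5)   # second read hits the fast tier
    ```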

    (Image: DDN IME)

  • WAN optimization

    WAN optimization seeks to accelerate long-distance network transfers by compressing and staging data to help speed it along. Even though it’s been around for a while, new approaches keep it a powerful way to lower latency.

    For example, Riverbed, in conjunction with Akamai, offers “aggregated optimizations” so that hybrid cloud traffic traversing the Internet can avoid bottlenecks. Other approaches eliminate intermediary rearrangement or fragmentation of data for a performance boost, or optimize the routes along which the data travels.
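
    The compression part of the story is easy to see with Python’s built-in zlib on deliberately redundant data (a toy illustration; the savings depend entirely on the payload):

    ```python
    import zlib

    # WAN optimizers compress traffic before sending it over long-haul
    # links; redundant protocol traffic compresses especially well.
    payload = b"GET /report?region=emea HTTP/1.1\r\n" * 200

    compressed = zlib.compress(payload, 6)
    ratio = len(compressed) / len(payload)
    print(f"{len(payload)} bytes -> {len(compressed)} bytes "
          f"({ratio:.1%} of original)")

    # Fewer bytes on the wire means fewer round trips for the same
    # data, which is where the win comes from on long-RTT links.
    assert zlib.decompress(compressed) == payload
    ```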

    (Image: geralt/Pixabay)

  • Docker

    Containers such as Docker make it far easier to run enterprise-grade services, and they address latency as well as data portability, scaling, processing, performance, and extensibility. Typically, a container performs a task on data stored locally within it, with minimal latency. When the container disappears, so does the data, which is fine unless the data has to persist.

    New approaches offer both persistence and performance by placing the compute right inside cloud storage, giving users practically zero latency between compute and storage in the cloud: the data doesn’t have to travel over a network connecting servers and storage if the application runs inside the storage. Thanks to this tight linkage, Docker containers can be launched automatically when needed, such as when a new file is added (sketched below).
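
    A minimal sketch of that launch-on-new-file trigger, using the Docker SDK for Python (assumes `pip install docker`, a local Docker daemon, and a hypothetical `etl:latest` image and watch directory; vendor placement-inside-storage products work differently):

    ```python
    import time
    from pathlib import Path

    import docker  # Docker SDK for Python; talks to the local daemon

    WATCH_DIR = Path("/data/incoming")   # assumed watched directory
    IMAGE = "etl:latest"                 # hypothetical processing image

    client = docker.from_env()
    seen = set(WATCH_DIR.iterdir())
    while True:
        # Poll for new files and run a container against each one,
        # bind-mounting the data so compute runs next to it.
        for new_file in set(WATCH_DIR.iterdir()) - seen:
            seen.add(new_file)
            client.containers.run(
                IMAGE,
                command=["process", f"/mnt/{new_file.name}"],
                volumes={str(WATCH_DIR): {"bind": "/mnt", "mode": "ro"}},
                detach=True,             # fire and forget
            )
        time.sleep(1)
    ```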

  • In-memory databases

    By keeping data close to the CPU, and thus lowering data-movement latency, in-memory databases allow applications whose working sets fit in system memory to run transactions and analytics in real time. Even a storage system delivering 1 million IOPS (I/O operations per second) may not move data between storage and compute quickly enough, but an in-memory database can. With some replication assistance, an embedded in-memory database can serve queries in front of a traditional database, further eliminating latency and network traffic.
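
    Python’s built-in sqlite3 in its in-memory mode gives a minimal feel for the idea (a stand-in, not any specific vendor product): the working set lives entirely in RAM, so transactions and queries never touch storage.

    ```python
    import sqlite3

    # ":memory:" keeps the whole database in RAM for this connection.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE trades (sym TEXT, px REAL)")
    db.executemany("INSERT INTO trades VALUES (?, ?)",
                   [("ACME", 101.5), ("ACME", 102.0), ("WIDG", 7.25)])

    # Analytics run directly against RAM-resident data:
    for sym, avg_px in db.execute(
            "SELECT sym, AVG(px) FROM trades GROUP BY sym"):
        print(sym, round(avg_px, 2))
    ```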

    (Image: ClkrFreeVectorImages/Pixabay)