
Nanosecond Storage Performance Latencies? Get Ready

In its natural state, nitroglycerin is extremely sensitive to shock and prone to unexpected explosions. Alfred Nobel invented a process to stabilize this volatile compound into a more practical form -- dynamite -- and helped usher in a new era of development by enabling hydropower, oil exploration, transcontinental railroads, and many other innovations of the Industrial Revolution. 

Likewise in the enterprise, the inability of random access memory (RAM) to retain its contents in the event of power loss precluded its use for primary data storage, despite its high-performance characteristics. But the enterprise is finding its stabilizing element.

The current generation of Intel Xeon processors is able to support up to 1.5 TB of memory each. In true "virtuous cycle" fashion, VMware recently announced support for up to 12 TB of RAM per host in its flagship product, vSphere 6, to take full advantage of that capacity. Driven by the possibilities afforded by this trend toward expanding server memory footprints, independent software vendors are making an effort to harness the potential of this resource to increase application performance. However, most attempts up to this point have also significantly affected IT operations and services.

There has been a range of attempts to solve this problem, with varying results:

  • Memory-caching libraries: Implementations built on memory-caching libraries can use vast amounts of memory to accelerate data access. Unfortunately, this method requires the user to change the application itself, which is no small undertaking and limits its reach (a sketch of this pattern follows this list).
  • In-memory applications: Some vendors embraced the large-memory trend early on and did the heavy lifting for their user bases. Did they solve the volatile nature of memory? Unfortunately not! For example, although SAP HANA is an in-memory database platform, its logs must be written outside the volatile memory structure to provide ACID (atomicity, consistency, isolation, durability) guarantees, so that database transactions are processed reliably (see the logging sketch after this list).

    In fact, SAP recommends using local storage resources, such as flash, to provide sufficient performance and data protection for these operations. Virtualizing such a platform becomes a challenge: mobility is reduced because isolated server-side storage resources impede the operational and clustering services that virtualized data centers have relied on for almost a decade.

  • Distributed fault-tolerant memory (DFTM): Solutions that enable DFTM allow every application in the virtualized data center to benefit from the storage performance potential of RAM with no operational or management overhead.
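
To make the first point concrete, here is a minimal sketch of the cache-aside pattern that memory-caching libraries push into application code. This is an illustration of the general technique, not any particular product's API: a plain Python dict stands in for an external cache such as memcached or Redis so the example runs on its own, and the function names, TTL, and record IDs are hypothetical. Notice that both the read path and the write path have to be rewritten to keep the cache coherent.

```python
# Illustrative cache-aside sketch: a dict stands in for a memory-caching
# service (e.g., memcached or Redis) so the example is self-contained.
import time

cache = {}          # stand-in for a networked, RAM-backed cache
CACHE_TTL = 300     # hypothetical expiry, in seconds

def read_from_database(record_id):
    """Simulate a slow, disk-bound query against the backing store."""
    time.sleep(0.005)
    return {"id": record_id, "value": f"row-{record_id}"}

def get_record(record_id):
    """Read path: the application must now consult the cache first."""
    entry = cache.get(record_id)
    if entry and entry["expires"] > time.time():
        return entry["data"]                    # cache hit: served from RAM
    data = read_from_database(record_id)        # cache miss: hit the database
    cache[record_id] = {"data": data, "expires": time.time() + CACHE_TTL}
    return data

def update_record(record_id, data):
    """Write path: the application must also invalidate stale cache entries."""
    # ... write `data` to the database here ...
    cache.pop(record_id, None)

if __name__ == "__main__":
    get_record(42)    # slow: falls through to the database
    get_record(42)    # fast: served from memory
```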
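
Similarly, here is a generic write-ahead-logging sketch that shows why an in-memory database still depends on fast, durable storage. This is a simplified illustration, not SAP HANA's actual internals: the table contents live in RAM, but a commit is acknowledged only after its log record has been flushed to a non-volatile device, and the class name and log path are hypothetical.

```python
# Generic write-ahead-logging sketch: data lives in RAM; durability comes
# from forcing the redo log to non-volatile media before acknowledging commits.
import json
import os

class InMemoryTable:
    def __init__(self, log_path):
        self.rows = {}                      # the working data set stays in RAM
        self.log = open(log_path, "ab")     # redo log on durable storage (e.g., flash)

    def commit(self, key, value):
        record = json.dumps({"key": key, "value": value}).encode() + b"\n"
        self.log.write(record)
        self.log.flush()
        os.fsync(self.log.fileno())         # block until the record hits the device
        self.rows[key] = value              # only now is the commit acknowledged

if __name__ == "__main__":
    table = InMemoryTable("/tmp/redo.log")  # hypothetical log location
    table.commit("customer:1", {"name": "Ada"})
```

That forced log flush is why fast local flash is recommended for log volumes, as noted above: every commit waits on that device.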

In many ways, the introduction of DFTM solutions is comparable to the introduction of vSphere High Availability (HA). Before vSphere HA, the architect had to choose between application-level HA capabilities and clustering services such as Veritas Cluster Server or Microsoft Cluster Service, with each solution impacting IT operations in its own way.

vSphere HA empowers every virtual machine and every application by providing robust failover capabilities the moment you configure a simple vSphere cluster service. Similarly, DFTM solutions defuse the volatile nature of memory by providing fault-tolerant write acceleration, synchronously writing copies of data to acceleration resources on multiple hosts to protect against device or network failure.
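
Conceptually, that write path looks something like the sketch below. This is a simplified illustration under assumed names, not any vendor's implementation: each host contributes a RAM-backed acceleration tier, and a write is acknowledged to the virtual machine only after a copy has also landed on at least one peer host, so a single device or host failure cannot lose an acknowledged write.

```python
# Conceptual sketch of fault-tolerant write acceleration: acknowledge a write
# only after replicas exist in RAM on more than one host. Class and host names
# are hypothetical.

class HostMemoryTier:
    """Stands in for the RAM acceleration resource on one host."""
    def __init__(self, name):
        self.name = name
        self.buffer = {}            # volatile write buffer held in RAM

    def store_replica(self, block_id, data):
        self.buffer[block_id] = data
        return True                 # a real system would handle timeouts and errors

def replicated_write(block_id, data, local_host, peer_hosts, peer_copies=1):
    """Write locally, then synchronously to peer hosts, before acking the VM."""
    local_host.store_replica(block_id, data)
    acked = 0
    for peer in peer_hosts:
        if peer.store_replica(block_id, data):    # crosses the network in reality
            acked += 1
            if acked == peer_copies:
                return True         # safe to acknowledge: copies exist on 2+ hosts
    raise IOError("not enough peer replicas; write through to shared storage instead")

if __name__ == "__main__":
    local = HostMemoryTier("esx-01")
    peers = [HostMemoryTier("esx-02"), HostMemoryTier("esx-03")]
    replicated_write("vm1-block-7", b"payload", local, peers, peer_copies=1)
```

The number of peer copies is the knob that trades memory consumption against the size of the failure you want to survive.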

The net effect is that you get predictable and persistent storage performance at microsecond latencies. Further, with new developments popping up in the industry every day, it is natural to wonder when we will hit nanosecond latencies for storage. When the industry can seriously contemplate these kinds of speeds, we can fundamentally change what applications expect out of storage infrastructure.

Application developers used to expect storage platforms to provide performance only in the millisecond range. This hindered innovation: Lack of storage performance was perceived as a barrier that prevented code improvement beyond a certain point. For the first time, storage performance is not the bottleneck, and with memory as a server-side acceleration resource, extremely fast storage is affordable.

Now the real question becomes: What if you can have a virtual data center with millions of IOPS available at microsecond latency levels? What would you do with that power? What new type of application would you develop, and what new use cases would it enable? If we can change the core assumption around the storage subsystem and the way it performs, then we can spur a new revolution in application development and open a new, exciting world of possibilities.