Ultra-high-performance networks are moving from the fringes of network deployments into the mainstream. These environments, often data centers, must collect, process, render, and move massive amounts of data at astonishing speeds. They are defined by three intertwined demands: hyperconnectivity, hyperperformance, and hyperscale.
Hyperscale Use Cases
These environments span a growing range of use cases that rely on big data and low-latency connections to compete successfully in today’s digital marketplace. Here are just a few:
Pharmaceuticals, energy, and aerospace: Pharmaceutical researchers leverage big data simulations to measure the effectiveness of new drugs while reducing risk. Geological surveys searching for new oil and gas resources must process vast amounts of field-sensor data to render 3D models for analysis. Aerospace firms perform advanced modeling to stress-test components without spending millions of dollars on prototypes. These workloads generate enormous data transfers known as “elephant flows,” which exceed the limits of traditionally scaled data centers and therefore demand hyperscale environments.
Online gaming and e-commerce: Similarly, online gaming and dynamic e-commerce websites must handle unprecedented levels of connections and transactions per second in real time. Beyond their predictably massive streams of data, these connections also include traffic bursts that often need to be offloaded to hardware for additional processing. In these environments, user experience is paramount: disrupted services erode customer loyalty and can result in significant loss of revenue.
Financial services industry: Many modern financial institutions now leverage hyperscale virtual services architectures to support super-fast communications between massively scaled services – such as compute, storage, and applications – that are co-hosted on both physical and virtualized platforms. These time-sensitive transactions require single-digit-microsecond latency, especially in high-frequency trading environments that must maintain extreme numbers of simultaneous flows, in order to preserve user experience and profitability.
Hyperscale Will Be the Norm
But these use cases are quickly becoming the rule rather than the exception. More and more mainstream organizations are turning to hyperscale to meet new business requirements. Organizations with large numbers of end-user and IoT devices, for example, now need to support new 5G networks for functions like real-time carrier-grade Network Address Translation. Hyperscale will also be required to support new 5G-enabled devices and edge networks that leverage hyperconnectivity and high-performance bandwidth to generate and stream new forms of rich media in ultra-high definition.
Custom Hardware is Needed to Address Hyperscale Issues
To address these challenges, many of the leading visionaries, such as Amazon, Apple, Google, and Microsoft, have filled their devices and data farms with custom application-specific integrated circuits (ASICs) to enable the hyperscale, hyperspeed, and hyperconnectivity that a growing number of organizations require – demands that will quickly become mainstream as enhancements such as 5G expand what’s possible even further. And it’s not just the extreme edges that rely on custom silicon. Even the hardware used to process video relies on custom Graphics Processing Units (GPUs) to keep pace with ultra-high-definition (UHD) streaming video and gaming.
The one area being left behind is security. Unlike innovators in other spaces, nearly every security vendor has failed to do the work needed to close the performance gap between their solutions and the demands of today’s digital businesses. Instead, they doggedly continue to rely exclusively on off-the-shelf hardware to run their resource-intensive security functions.
The irony is that even inspecting normal traffic requires far more processing power than simple routing and switching. This is why security devices are often packed with high-end off-the-shelf processors, and part of why they can be so prohibitively expensive. Even then, far too many security solutions deliver only low performance and high latency, which impacts business agility, productivity, revenue, and return on investment. Inspecting encrypted traffic – which, according to a recent Google Transparency Report, comprises between 80 and 90 percent of all internet traffic – takes such a toll on conventional processors that many vendors refuse to even publish their performance numbers.
As a result, many of the organizations behind the extreme use cases cited above have had to compromise on securing their data and transactions, relying instead on measures like home-grown access-control lists (ACLs) for protection. Meanwhile, their data remains vulnerable to a wide variety of attacks, such as the surreptitious injection of bad data, which can skew test results, poison AI learning systems, and put businesses – and even individuals – at risk if flawed prototypes go into production based on compromised data.
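The gap is easy to see in miniature: an ACL matches only packet-header fields such as addresses and ports, so a malicious payload riding on a permitted connection is never examined. The following sketch illustrates this with hypothetical rules and packet fields (the names and values are illustrative, not any vendor's format):

```python
# Minimal sketch of header-only ACL filtering (hypothetical rules/fields).
# An ACL permits or denies traffic based on header fields alone, so a
# poisoned payload on an allowed connection passes untouched.

def acl_permits(packet, rules):
    """Return True if the first matching rule permits the packet."""
    for rule in rules:
        if rule["src"] == packet["src"] and rule["dport"] == packet["dport"]:
            return rule["action"] == "permit"
    return False  # implicit deny when no rule matches

rules = [{"src": "10.0.0.5", "dport": 443, "action": "permit"}]

benign = {"src": "10.0.0.5", "dport": 443, "payload": b"GET /data"}
poisoned = {"src": "10.0.0.5", "dport": 443, "payload": b"<bad sensor data>"}

# Both packets are permitted: the payload never influences the decision.
print(acl_permits(benign, rules), acl_permits(poisoned, rules))  # True True
```

Catching the poisoned payload would require deep packet inspection of the (often encrypted) content itself, which is exactly the compute-intensive work that off-the-shelf hardware struggles to perform at hyperscale line rates.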
Without Custom Hardware, Security Vendors Put Everyone at Risk
This situation should simply not be tolerated. The failure of the security industry to keep up puts our entire digital economy at risk. Organizations need to seek out security vendors that innovate – vendors actively working to deliver solutions that compromise on neither security nor performance. In a time when hyperperformance requirements are becoming mainstream, and cloud platforms enable such massive hyperscalability, security cannot afford to lag behind. Leaving critical processes to run in the open – or at best, barely protected by things like access-control lists – is a devil’s bargain that no one should be forced to make.