Passing Packets: Net Traffic Under Ever More Scrutiny

Network managers now want to make all kinds of decisions based not on the source or destination addresses of a packet, but on its content.

March 2, 2004

One of the design goals of the Internet Protocol was easy routing. The router only had to look at address information in the packet header to determine what to do with the packet. The whole process was designed to be payload-independent: The routing device did not even need to know what type of content the packet carried, or how it was encoded.

My, how times have changed.

Network managers now want to make all kinds of decisions, many of them Draconian, based not on the source or destination addresses of a packet, but on its content. Spam must be recognized accurately and filtered out, or the growing impact on knowledge-worker productivity will be so large that it will have to be discussed in the annual report. Not to mention that some types of spam could expose the enterprise to legal liability.

Viruses, worms, Trojan horses and many other types of executable code are to be excluded. Some types of technically harmless but emotionally objectionable content are to be shut out as well. And, increasingly, there are institutional policies to be followed, about not only what may come into a network but also what may leave it.

The only answer is to inspect the content of individual packets, even if they are compressed, camouflaged or encrypted, and to make decisions based on what is found there. But to impose this task high in the network, at an access point, an enterprise gateway or even on the backbone itself, is to accept crippling performance demands. Vendors are now responding with innovative hardware acceleration to meet the challenge, claiming wire-speed deep-packet inspection is not only an achievable goal, but a reality.

"I think the Mydoom worm once again focused everyone's attention on the problem of invasive critters," observed Greg Brown, director of network security products at accelerator vendor Tarari Inc. (San Diego). Yet Mydoom proved relatively easy to detect. "It had about seven or nine keywords, one of which always appeared as the name of an executable attachment," Brown said. It was simple to write a scanning routine to detect it.
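A scanning routine of that kind amounts to little more than checking each payload against a short list of fixed byte strings. The Python sketch below shows the shape of such a check; the keyword list is invented for illustration, not taken from any real signature database.

```python
# Minimal sketch of a fixed-pattern payload scan. The keyword list is a
# placeholder; real signatures come from an anti-virus vendor's definitions.
SUSPECT_KEYWORDS = [b"readme.exe", b"Mail transaction failed", b"test.zip"]

def payload_is_suspect(payload: bytes) -> bool:
    """Return True if any known fixed keyword appears in the packet payload."""
    return any(keyword in payload for keyword in SUSPECT_KEYWORDS)
```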

But simple scans for a single fixed pattern aren't the problem. These can be done today, often in software, at only moderate cost in throughput if they are performed close to the network termination. The problem comes from a number of factors: compounding, uncertainty and concealment.

Evil fragments
Compounding is easy. Mydoom, for instance, isn't the only thing for which anti-virus screens must scan. There are myriad evil fragments, each with its own characteristic pattern or patterns. When the scanning is done in software, it is extremely difficult to keep the scanning time from growing at least linearly with the number of patterns to be matched. But by stating the patterns as regular expressions, it is possible to compile a set of regular expressions into a single-pass scan algorithm that reduces the rate of growth.
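The idea can be sketched in a few lines: instead of walking the payload once per signature, the signatures are merged and compiled into one matcher that runs a single time. The patterns below are placeholders, and Python's re module stands in for the table-driven engines such products actually use.

```python
import re

# Hypothetical signature patterns, each written as a regular expression.
SIGNATURES = {
    "worm_a": rb"readme\.(exe|scr|pif)",
    "worm_b": rb"Mail transaction failed.{0,200}attachment",
    "trojan_c": rb"MZ\x90\x00.{50,200}CreateRemoteThread",
}

# Merge all signatures into one compiled pattern using named groups, so the
# payload is traversed once instead of once per signature.
COMBINED = re.compile(
    b"|".join(b"(?P<" + name.encode() + b">" + pattern + b")"
              for name, pattern in SIGNATURES.items()),
    re.DOTALL,
)

def scan(payload: bytes):
    """Return the name of the first matching signature, or None."""
    match = COMBINED.search(payload)
    return match.lastgroup if match else None
```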

Uncertainty is less easy to deal with. Some objects of detection don't contain a single unambiguous pattern, but must be detected by a sequence of inferences based on partial patterns. Here again the power of regular expressions comes into play, but with less ease. As anyone who has wrestled with grep in a Unix class can attest, finding a regular expression to cover a range of disjoint possible patterns is not trivial. And again, with many scanning solutions, even if it's possible to handle something more than a simple match, the search time rises rapidly with the complexity of the pattern.
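A toy example shows how quickly such expressions grow. Suppose a filter must catch a single spam keyword whose letters may be replaced with look-alike characters or separated by punctuation; the pattern below is invented purely for illustration.

```python
import re

# Hypothetical obfuscation-tolerant pattern: the word "viagra" with common
# look-alike substitutions (i/1/!, a/@/4, g/9) and optional punctuation
# between letters. Even one fuzzy keyword needs a fairly dense expression.
OBFUSCATED_KEYWORD = re.compile(
    rb"v[\W_]*[i1!|][\W_]*[a@4][\W_]*[g9][\W_]*r[\W_]*[a@4]",
    re.IGNORECASE,
)

def looks_like_spam(payload: bytes) -> bool:
    """Return True if the payload contains an obfuscated form of the keyword."""
    return OBFUSCATED_KEYWORD.search(payload) is not None
```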

And the patterns can be exceedingly subtle. "One of the big markets right now is in the servers of European cellular service providers," said Tarari vice president of marketing John Bromhead. "Now that they have deployed handsets with graphics capability, spammers are sending pornographic images to people's cell phones. There's a rather urgent project to figure out how to scan packet data for flesh tones."

Concealment is a potentially worse problem yet, particularly because it is generally caused by the user, not the invader. At the point where they must undergo inspection, many packets today have their payloads scrambled in some fashion that makes them unavailable for inspection.

"All browsers today can handle compressed data, for instance," said Bromhead. "So the top sites-the ones that carry the most traffic-are using compression to keep their throughput up. I saw a recent study that said at the top sites, 29 percent of the traffic was compressed data."

That means the inspection site either has to be able to scan compressed data for patterns, a quite difficult task, or must decompress the packet payloads before scanning. "But decompression all by itself can cause a performance hit of 20 times on these boxes," Bromhead warned.
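In practice, decompressing before scanning means inflating the payload in line and handing the result to the same matcher used for plain traffic. A minimal sketch, assuming gzip- or deflate-encoded HTTP bodies (the encodings browsers of the day negotiated), might look like this:

```python
import zlib

def decompress_http_body(body: bytes, content_encoding: str) -> bytes:
    """Inflate a compressed HTTP payload so it can be pattern-scanned.

    Handles gzip and deflate; anything else is passed through unchanged.
    """
    if content_encoding == "gzip":
        # wbits=47 tells zlib to auto-detect a gzip or zlib header.
        return zlib.decompress(body, wbits=47)
    if content_encoding == "deflate":
        # Raw deflate stream, no header.
        return zlib.decompress(body, wbits=-zlib.MAX_WBITS)
    return body
```

That extra pass over every compressed payload is where the vendor's quoted 20x performance hit comes from.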

Eventually it may become necessary to scan secure packets as well, which may force security checkpoints to participate in key exchange and to decrypt packet data in transit.

A potentially even more complex problem than spam and viruses is emerging in outgoing traffic. Increasingly, organizations are creating policies to prohibit certain kinds of information from leaving the organization's network. Whether it be trade secrets or personal e-mail, such data is generally so loosely defined that it challenges even the most flexible scanning. Yet policy enforcement, as the job is known, could be one of the biggest drivers in reshaping the access network in the next few years.

Hardware strikes back
Not surprisingly, a desperate need for throughput on a complex algorithm elicits responses from hardware architects. But this one is a bit different. The sheer variety of patterns that must be sought is daunting. Their volatility, even from hour to hour in some cases, appears to preclude a hard-wired solution. Yet hardware acceleration is clearly necessary.

Tarari addresses that problem with a proprietary chip called a Content Processing Engine (CPE), essentially a dense multiprocessing chip with individual processor elements optimized for packet-based operations. The company provides the chip only at the subsystem level, packaged as two chips and supporting memory and interface hardware on a single board.

Software is available from Tarari to equip the engines for a number of tasks, including decompression and decomposition, simple scanning and XML parsing. Its most recent software release is a regular-expression processor for the CPE. This allows the security manager to create a set of POSIX 1003.2-compliant expressions, which are then compiled into single-pass code for the CPE. Individual pattern segments can be switched on and off without re-creating the entire set.
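Tarari's interfaces are proprietary and are not documented here; purely to illustrate the idea of toggling pattern segments and recompiling the combined expression, a hypothetical rule-set manager might look like this:

```python
import re

class PatternSet:
    """Toy model of a rule set whose entries can be switched on and off.

    This is not Tarari's API; it only illustrates recompiling a combined,
    single-pass expression whenever an individual segment is toggled.
    """

    def __init__(self):
        self._patterns = {}   # name -> (regex source, enabled flag)
        self._compiled = None

    def add(self, name: str, source: str, enabled: bool = True):
        self._patterns[name] = (source, enabled)
        self._compiled = None  # force recompilation on the next scan

    def toggle(self, name: str, enabled: bool):
        source, _ = self._patterns[name]
        self._patterns[name] = (source, enabled)
        self._compiled = None

    def scan(self, text: str):
        """Return the name of the first enabled pattern that matches, or None."""
        if self._compiled is None:
            active = [f"(?P<{n}>{s})" for n, (s, on) in self._patterns.items() if on]
            if not active:
                return None
            self._compiled = re.compile("|".join(active))
        match = self._compiled.search(text)
        return match.lastgroup if match else None
```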

The Tarari board is intended for inclusion in a larger system. But system-level solutions are hitting the market as well. An integrated security system for enterprise networks is being shown this month by iPolicy Networks Inc., a company representing the merged assets of Duet Technologies and Tunnelnet Inc. Originally intended for carrier customers offering managed security services, the Fremont, Calif., company's ipEnforcer is now being introduced in special versions for the enterprise and, eventually, even for remote telecommuters using cable modem or DSL links.

Prabhu Goel, iPolicy's chief executive officer, said the original platform developed for carriers used 14 separate network processors in a 2U system. For the new enterprise platforms, iPolicy has turned to a Pentium-based programmable system where all security software operates at the kernel level.

"What we didn't change is our commitment to programmable architectures rather than ASICs," said Goel. "Many security access specialists try to rely on ASICs, but when you want to do multiple packet inspections at Layers 4 through 7, you need the advantages of programmability."

The assumption driving design of the ipEnforcer was that corporations and carriers alike would need multiple layers of security, preferably implemented in one system residing next to the router. The platform would have to act as deep-inspection firewall, intrusion-detection and -prevention system, virtual private network, anti-virus controller, content-filtering engine, surveillance system and assessment platform for network vulnerabilities.

These tasks can't be handled sequentially, Goel said, which is why the company's founders came up with the concept of a single-pass inspection engine with submillisecond latency. The system uses an adaptive rules-based framework built on a decision-tree analysis architecture. Because so many security threats involve simple yes-no dichotomies, Goel said, a decision tree can be very, very broad, yet shallow enough to allow resolution of problems at wire speed. For the fastest system, the 6500, packet throughput is 5 Gbits/second.
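The actual iPolicy engine is not public, but a crude way to picture the broad, shallow decision tree Goel describes is sketched below: each node asks one cheap yes/no question about a packet, so even a very wide tree resolves after only a few tests. The rules are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    """One yes/no test in a shallow packet-classification tree."""
    test: Callable[[dict], bool]
    if_true: Optional["Node"] = None
    if_false: Optional["Node"] = None
    verdict: Optional[str] = None   # set on leaf nodes only

def classify(packet: dict, node: Node) -> str:
    """Walk the tree; depth stays small even when the tree is very broad."""
    while node.verdict is None:
        node = node.if_true if node.test(packet) else node.if_false
    return node.verdict

# Invented example rules: three levels deep, so at most three tests per packet.
leaf_drop = Node(test=lambda p: True, verdict="drop")
leaf_pass = Node(test=lambda p: True, verdict="pass")
leaf_scan = Node(test=lambda p: True, verdict="send to content scan")

root = Node(
    test=lambda p: p.get("dst_port") in (25, 80, 443),      # interesting service?
    if_true=Node(
        test=lambda p: p.get("payload_compressed", False),   # needs inflating first?
        if_true=leaf_scan,
        if_false=Node(
            test=lambda p: len(p.get("payload", b"")) > 0,
            if_true=leaf_scan,
            if_false=leaf_pass,
        ),
    ),
    if_false=Node(
        test=lambda p: p.get("src_blacklisted", False),
        if_true=leaf_drop,
        if_false=leaf_pass,
    ),
)
```

For a packet dictionary such as {"dst_port": 80, "payload": b"GET /"}, classify(packet, root) runs only three tests before returning a verdict.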

Tiered management
As important as the parallel packet analysis carried out by the system is the tiered centralized management allowed by the software. Multiple administrations in separate security domains can be defined by either location or job function. For security correlation and enforcement in a Web-services environment, iPolicy uses the Simple Object Access Protocol to perform behavioral anomaly-response analysis.

The ipEnforcer relies on a couple of licensed tools. Since it is not intended as a file scanner for anti-virus applications, iPolicy has licensed anti-virus tools from McAfee/Network Associates and the URL database (but not the scanning tool) from SurfControl.

Even as the first systems reach the market, the future is closing in on them. Spammers and virus writers haven't really had to think up ways to escape deep-packet inspection before. Nor have corporate executives, presented with a new form of power, really thought about how much fun policy creation could be. The problem is only going to get more complex.

And the definition of wire speed is going to get faster. As these capabilities become more recognized, there will inevitably be pressure to instantiate them not just in firewalls and enterprise gateways, but in enterprise backbones as well.

At that point, wire speed may mean optical speed, a concept that National Broadband, among others, offered some time ago. The Boulder, Colo., company envisions a national optical network to serve communities that lie apart from major urban clusters.

Each of National Broadband's nodes would be an optical switch linking the network to a set of local point-to-point and multipoint fixed wireless networks. Security and policy engines would reside in the optical switches.
