Review: Passive Vulnerability-Assessment Scanners

Passive vulnerability-assessment scanners provide a noninvasive view of network traffic. While neither product we tested impressed, we awarded our Tester's Choice to the one with better data-mining tools and more accurate protocol discovery.

October 1, 2005


With all the detection technology available in network and host scanners, not to mention configuration- and patch-management tools, what's the use of yet another network discovery method? Passive vulnerability assessment takes a unique approach: In monitoring network traffic, it attempts to classify a node's operating system, ports and services, and to discover vulnerabilities an active scanner like Nessus or Qualys might not find because ports are blocked or a new host has come online. The data may then provide context for security events, such as correlating with IDS alerts to reduce false positives.

Passive VA Scanner Features


Passive analysis offers two key advantages. The first is visibility. There's often a wide gap between what you think is running on your network and what actually is. Both network and host scanners report only what they can see, and they're thwarted by network and host firewalls. Even when a host is live, the information gathered is sometimes limited to banner checks and some noninvasive configuration checks. If your scanner has the host's credentials, it can query for more information, but false positives are a huge problem, and you still may not see everything. Further, a rootkit may listen on a port that never gets scanned or, in the case of UDP, may not respond to a random probe. If an active vulnerability assessment scanner doesn't see it, it doesn't exist to the scanner.

Host firewalls are common even on server computers, so how do you detect a rogue server or laptop with an active scan? A passive sensor might see rogues if they're chatting on the network; that's visibility a scanner won't give you. A passive sensor also will detect activity to and from a port that isn't usually scanned, and may detect nonstandard port usage, provided the sensor can decode and classify the traffic. For example, simple flow analysis won't detect SSH or telnet on Port 80, but protocol analysis may.

The second major advantage of passive analysis is that it's noninvasive--it doesn't interrupt network operations. Active vulnerability assessment scanners are invasive and can disrupt services, despite their developers' efforts to minimize the potential for outages. Even using so-called safe scans, we've taken out switches, our NTP service and a host of other critical infrastructure components. Several years ago, we even bounced our core router twice with an nmap port scan. Oops.

We tested SourceFire's Real-time Network Awareness (RNA) and Tenable Network Security's Nevo, the only two passive vulnerability scanners currently available, in our Syracuse University Real-World Labs®. In both cases, we had to install the vendor's management console--SourceFire's Defense Center and Tenable's Lightning--to analyze the data from the passive sensors. Although RNA scored higher on our Report Card and received our Tester's Choice award, largely because of its accurate protocol discovery and Defense Center's excellent data-mining tools, we don't feel either product is accurate enough in discovering vulnerabilities passively to put much stock in--yet.

And neither product is cheap. For a 1,000-node network, RNA weighed in at a little over $25,000 for the sensor and licenses. Nevo costs $9,950, but you also need Lightning and node licenses, which bump the price up to $24,000 plus hardware. Both vendors are revving up their products, and new versions with marked improvements should be coming in late 2005. So hold off on that purchase for the moment.

SourceFire Real-time Network Awareness 1000 3.02

RNA is a mixed bag of hits and misses. In general, it detected our hosts' operating systems with a fair degree of accuracy--no better or worse than what we would expect with active scanning. RNA also detected protocols on nonstandard ports, including the use of SSL on high ports, and telnet and SSH on common ports, such as 80 and 25. But we were puzzled about why RNA didn't identify several Red Hat Enterprise Linux 3 systems and why it suddenly discovered hosts that had never been alive. SourceFire engineers said the source of the host-discovery problem might have been a device on our network. We're working with them to nail down the cause.

Vulnerability detection is much less accurate. RNA simply assigns all known vulnerabilities for an OS and servers, regardless of patch levels. However, RNA shone in reporting, and its query tools let us create finely tuned queries quickly and easily.

After a simple installation, RNA immediately started finding and classifying hosts. Rather than classifying hosts, operating systems and services once and sticking with that classification, RNA continually reassesses its classifications and assigns each a confidence level. The more RNA learns about a host, the higher the confidence it shows in its classifications. This matters because operating system and protocol detection isn't an exact science. RNA uses TCP fingerprinting to identify operating systems and protocol analysis to classify services, but many factors can thwart accurate classification, including proxies, NAT/NAPT routers and TCP stack tuning. Patching hosts can also affect TCP stack behavior. SourceFire doesn't provide a way to manually classify a host in RNA, which would be useful when RNA doesn't properly classify an operating system or service. We tried to create custom fingerprints using the SourceFire tool by defining the IP parameters and then letting RNA attempt to discover the unique fingerprint, but we never got a match.
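Passive TCP fingerprinting with accumulating confidence, of the general sort RNA performs, can be sketched in a few lines. The fingerprint table, the choice of header fields and the confidence arithmetic below are illustrative stand-ins of our own, not SourceFire's actual data or algorithm:

```python
from collections import Counter

# Illustrative fingerprint table: (initial TTL, TCP window size) -> OS guess.
# Real passive fingerprinters match many more fields: TCP options order,
# MSS, the DF bit and so on, which is why stack tuning can fool them.
FINGERPRINTS = {
    (64, 5840): "Linux 2.4-2.6",
    (128, 65535): "Windows XP/2000",
    (255, 8760): "Solaris 8/9",
}

def nearest_initial_ttl(observed_ttl):
    """Round an observed TTL up to the nearest common initial value,
    since each router hop along the path decrements it."""
    for initial in (64, 128, 255):
        if observed_ttl <= initial:
            return initial
    return 255

class HostProfile:
    """Accumulates passive observations and reports a confidence level."""
    def __init__(self):
        self.votes = Counter()

    def observe_syn(self, ttl, window):
        guess = FINGERPRINTS.get((nearest_initial_ttl(ttl), window), "unknown")
        self.votes[guess] += 1

    def classification(self):
        if not self.votes:
            return ("unknown", 0.0)
        guess, count = self.votes.most_common(1)[0]
        # Confidence grows as more observations agree.
        return (guess, count / sum(self.votes.values()))

host = HostProfile()
for ttl, window in [(57, 5840), (55, 5840), (63, 5840)]:
    host.observe_syn(ttl, window)
print(host.classification())  # ('Linux 2.4-2.6', 1.0)
```

The continual-reassessment design means a host behind a NAT router, which blends several stacks into one IP address, ends up with split votes and a visibly low confidence rather than a single wrong answer.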

RNA seeks to determine which protocols are in use through in-depth protocol analysis of all network traffic. Network services typically run on well-known ports--for example, 25 for SMTP, 23 for telnet, 22 for SSH and 443 for SSL. However, you can run any service on any port. Attackers use this capability to bypass firewall rules and detection devices. For example, by putting an SSH server on Port 25 or Port 80, they attempt to mask traffic as e-mail or Web traffic. Only by analyzing the protocol can you determine what's actually running on those ports.

We tested RNA's detection capability by putting telnet and SSH servers on high ports (above 1,024). RNA correctly detected the new TCP services and had a rough idea--50 percent confidence--that the protocols were either telnet or SSH, depending on what we used. Continued use of the service increased RNA's confidence level. Putting a telnet or SSH server on a well-known port, on the other hand, yielded different results. Although RNA immediately detected the port usage, the protocol detection often took longer because RNA needed to see more traffic to analyze the protocol.
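The port-versus-protocol distinction is easy to illustrate. The toy classifier below keys off the first payload bytes of a session instead of the port number; the signatures are simplified illustrations of how such matching works, not either vendor's rule set:

```python
def classify_payload(first_bytes: bytes) -> str:
    """Guess the application protocol from the opening payload bytes of a
    session, ignoring the port entirely. Signatures are illustrative."""
    if first_bytes.startswith(b"SSH-"):   # SSH servers announce a version banner
        return "ssh"
    if first_bytes[:1] == b"\xff":        # telnet opens with IAC option negotiation
        return "telnet"
    if first_bytes.split(b" ")[0] in (b"GET", b"POST", b"HEAD", b"HTTP/1.1", b"HTTP/1.0"):
        return "http"                     # HTTP request line or response status
    if first_bytes[:2] == b"\x16\x03":    # TLS handshake record header
        return "ssl"
    return "unknown"

# An SSH server answering on Port 80 is still identified as SSH:
print(classify_payload(b"SSH-2.0-OpenSSH_3.9p1\r\n"))  # ssh
```

A flow-only sensor would log the same session simply as "TCP/80" and move on, which is exactly the gap the review describes.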

SourceFire has built in some great query tools that are relatively easy to use and, with some experimentation, intuitive. We would like to see some user interface enhancements, such as drop-down menus for selections and buttons for setting a time window on the results page. But aside from those minor nits, we were impressed. Filling in a query form, including a negate operator, is a snap, and the queries aren't case-sensitive--a nice touch. For example, it was simple to find all the hosts on our networks with unknown and unidentified operating systems, or all the systems running openssh with a version older than 4.0. Generating reports was equally simple using the query method and defining the various parameters. Reports can also be scheduled for automatic generation and viewing.

Tenable Network Security Nevo 2.0
Nevo, Tenable's foray into the passive-analysis market, detects operating systems, protocols and vulnerabilities, and yet it's very different from RNA. One of Tenable's goals is to reduce false positives, so Nevo is designed to detect vulnerabilities through passive analysis of traffic, rather than by just populating a table with all possible vulnerabilities based on operating system and server versions, as in the case of RNA. That doesn't mean Nevo is necessarily more accurate than RNA when reporting vulnerabilities, however. For example, Nevo didn't detect well-known vulnerabilities such as those of MSRPC-DCOM, Solaris telnet and dtlogin that we actively exploited, leading to false negatives. Tenable's response is that passive detection should be accompanied by targeted active scanning. But Nevo is far more effective than RNA at correctly detecting client applications, including Web browsers, media players and update agents.

Nevo uses a variety of methods to identify protocols, including banner grabs and protocol analysis. It detected our SSH servers running on high ports and properly identified the versions. However, it didn't detect telnet on high ports, so Tenable supplied a working detection signature. As a fallback, Nevo automatically detects interactive sessions based on packet behavior. It may also detect encrypted traffic by the randomness of the bytes in the packet payload--something RNA doesn't do--so looking for that kind of traffic can be useful. Unfortunately, Nevo doesn't decode common protocols used by worms, such as SMB and UPnP. The upshot is that protocol detection--and, therefore, vulnerability detection--is spotty.
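Spotting encrypted traffic by payload randomness comes down to measuring entropy: ciphertext looks like uniformly random bytes, while cleartext protocols don't. The sketch below uses Shannon entropy with an invented 7.0-bits-per-byte cutoff for illustration; it is not Tenable's actual heuristic:

```python
import math
import os
from collections import Counter

def shannon_entropy(payload: bytes) -> float:
    """Shannon entropy in bits per byte. Encrypted or compressed data
    approaches the 8.0 maximum; text protocols sit far lower."""
    if not payload:
        return 0.0
    total = len(payload)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(payload).values())

def looks_encrypted(payload: bytes, threshold: float = 7.0) -> bool:
    # The threshold is an illustrative cutoff, not a vendor value.
    return shannon_entropy(payload) > threshold

print(looks_encrypted(os.urandom(4096)))                 # True: random bytes mimic ciphertext
print(looks_encrypted(b"GET / HTTP/1.0\r\n" * 100))      # False: plaintext is low-entropy
```

Note the caveat built into the technique: compressed data (images, zip downloads) also scores high, so a sensor using entropy alone will flag plenty of benign traffic as "encrypted."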

Creating ad hoc queries with Lightning, the management console for the events Nevo sends, isn't easy. Unlike with RNA, there's no negate operator, which makes inductive analysis difficult. We worked around some of the search limitations by attempting various patterns and identifying a vulnerability ID and then searching for only that ID, but this wasn't an intuitive method. Without the ability to save queries for later use, we resorted to writing notes to keep track. Tenable is addressing many of these problems in an upcoming release.

Mike Fratto is editor of Secure Enterprise. He was previously a senior technology editor for Network Computing and an independent consultant in central New York. Write to him at [email protected].

Methodology

We deployed the sensors on our Real-World Labs® network behind our SonicWall 3060 in Syracuse, N.Y., and Green Bay, Wis. In addition, we deployed SourceFire's IDS 3000 alongside our Snort 2.3.2 installation, both of which did intrusion detection off all three networks in Syracuse. We used in-line taps on each network to pull traffic off the wire and dump it onto a Hewlett-Packard switch. We then mirrored the traffic to a hub so that all devices could see the same traffic at the same time. Because we monitor multiple networks simultaneously, we used the HP switch to buffer packets that would cause collisions.

Syracuse network

Our network is made up of 150 to 200 production hosts running a mix of Windows 2000, 2003 and XP, and various distributions of FreeBSD, Linux and Sun Solaris. In addition, we have many appliances running custom OSs. In short, we have variety (see the Syracuse network diagram). We initially ran the products without attempting any exploits to determine what they would detect and what comprised normal traffic. We matched up OS detection and service identification, then compared the vulnerabilities the products discovered with the vulnerabilities present by checking applied patch levels and, where possible, active exploitation using Core Security Technologies Impact 5.0.

We further tested protocol detection by placing common services, such as telnet, SSH and HTTP, on nonstandard ports, both high ports and misused well-known ports. We placed a telnet daemon on Port 25, which is commonly used for SMTP, and an SSH daemon on Port 443, commonly used for SSL-protected Web traffic. We then initiated several connections to the host and port pairs to gauge whether the products properly identified the protocol and how long that took. None of these detections were in real time.

We didn't score the capability of the management products to correlate IDS events with passive detection, because doing so would have gone beyond the bounds of the review. We conducted some preliminary tests, however, and weren't impressed by either product. We used the open-source Nessus scanner to scan our hosts, and Core Security Technologies Impact 5.0 to attack vulnerable hosts. That way, we were assured of firing off IDS events.

All Secure Enterprise product reviews are conducted by current or former IT professionals in our Real-World Labs® or partner labs, according to our own test criteria. Vendor involvement is limited to assistance in configuration and troubleshooting. Secure Enterprise schedules reviews based solely on our editorial judgment of reader needs, and we conduct tests and publish results without vendor influence.

Is Passive VA An Accurate Filter?

There are limits to what passive analysis can achieve in products. Using passive OS detection to filter down IDS events may help to lessen the noise, but a misclassification of an OS--or, more commonly, too broad a classification--can interfere with event analysis. Knowing that a host is Linux- or Windows-based cuts out many false positives, but even among Windows 2000 Server Service Packs 0-4, 2003 and XP, there are diverse vulnerabilities. Patch levels can't always be determined through passive analysis, so an administrator still must do lots of processing by hand.

We found that Defense Center's and Lightning's correlation of passive analysis and IDS events resulted in different data points. After attacking our network, we managed to gain control of a handful of Windows and Sun Solaris computers. However, many attacks failed, mainly because the target had been patched. With the various Microsoft RPC exploits, Defense Center flagged both successful and unsuccessful attacks as high impact; some of those were false positives. However, the 111 correlated high-impact attacks were far fewer than the 50,000 or more IDS events collected. When we searched for the second highest impact, the number of events reached 5,430, many of which were false positives. Lightning correlated fewer attacks with vulnerabilities, largely because fewer protocols are decoded--for example, Microsoft's SMB (Server Message Block) protocol--but it showed us other events, such as DNS zone transfers and Terminal Sessions. The difference lies with Nevo's protocol detection.
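The correlation logic at work here, flagging an IDS event as high impact only when the passively built profile says the target actually runs something the exploit applies to, can be sketched as follows. The profile table, field names and impact tiers are our invention for illustration, not Defense Center's or Lightning's actual schema:

```python
# Hypothetical passive profiles, as a sensor might accumulate them.
passive_profile = {
    "10.0.0.5": {"os": "Windows 2000", "services": {135: "msrpc"}},
    "10.0.0.9": {"os": "Linux 2.6", "services": {22: "ssh"}},
}

def impact(event):
    """event: target IP/port plus the platform the IDS signature targets.
    Returns an impact tier based on how well the passive profile matches."""
    host = passive_profile.get(event["dst_ip"])
    if host is None:
        return "unknown"   # never observed passively -> nothing to correlate
    if not host["os"].startswith(event["target_os"]):
        return "low"       # exploit doesn't apply to this OS: likely noise
    if event["dst_port"] not in host["services"]:
        return "medium"    # OS matches but no service seen on that port
    return "high"          # OS and service both match: investigate first

rpc_attack = {"dst_ip": "10.0.0.5", "dst_port": 135, "target_os": "Windows"}
print(impact(rpc_attack))                             # high
print(impact({**rpc_attack, "dst_ip": "10.0.0.9"}))   # low
```

This also shows why misclassification hurts: a Windows box the sensor has pegged as "unknown" or "Linux" silently demotes real attacks against it, which is the false-negative risk raised above.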

This wasn't an exhaustive test, but we were testing production networks and exploiting a vulnerable system from the inside. The relatively high number of false positive and negative reports from both products gives us pause. Our recommendation is to thoroughly test the passive analysis and correlation before diving in.


