Attack tools are getting more user-friendly and more automated. This past February's DDoS attacks proved the power of automated destruction. So what do the good guys have stacked in their favor? Tools that automate the vulnerability discovery process. Tools that help you secure your systems. Tools that give the administrator the edge--or so the marketing speak goes.
Over the course of a month, in our Chicago labs, we set out to test vulnerability scanners--tools that were designed to remotely assess your network by finding the vulnerabilities on your systems before the bad guys do. These solutions constitute just one of the many lines of defense your company should deploy.
We decided to entrust the security of our test network to Axent Technologies' NetRecon, BindView Corp.'s HackerShield, eEye Digital Security's Retina, Internet Security Systems' Internet Scanner, Network Associates' CyberCop Scanner, and two open-source products: Nessus Security Scanner and Security Administrator's Research Assistant (SARA). One product, World Wide Digital Security's System Analyst Integrated Network Tool (SAINT), is open source, with a commercial reporting tool. We initially looked at the WebTrends Security Analyzer, but after discovering the product requires host-based agents to perform thorough scans, we removed it from the review because it differs too greatly from the other products.
In an effort to cut through the hype, we took five OS platforms (Hewlett-Packard Co. HP-UX, Microsoft Windows NT, Novell NetWare, Red Hat Linux and Sun Microsystems Solaris), each with specific published vulnerabilities (see "How We Tested"), and set the scanners loose. Considering these were known vulnerabilities (several of which have been around for quite some time!), this test should have been a piece of cake: The scanners should have been able to identify, classify and report on all 17 vulnerabilities. Unfortunately, the results weren't so sweet.
Borrowing eEye's tag line: Is the "vulnerability over"? Based on our testing of these tools, we'd say no -- not by a long shot. We set up 17 of the most common and critical vulnerabilities out there, and not one product detected them all (see "Vulnerability Scanners: Detection Results"). The closest was the Nessus Security Scanner, which nailed 15 of the 17. But even one hole is too many. Because all the products failed to identify key vulnerabilities, none of them received our Editor's Choice award.
It got worse from there. In addition to missing a number of major holes, some of the tools presented us with confusing reports, contradictory information and misdiagnosed vulnerabilities. If nothing else, our testing should serve as a wake-up call: If your organization is relying on these scanners to identify all your critical system vulnerabilities, it's time to rethink and regroup. Worse, if you are a consulting firm basing your assessment services on these products, you'd better have some system in place to cover for their shortcomings, because on their own, these products don't cut it.
However, products like Internet Scanner, Nessus Security Scanner and NetRecon are still quite powerful and will make worthy additions to your organization's security tool belt, especially if you don't have any such efforts in-house right now. In the end, we didn't favor the scanner with the thickest report, the sharpest interface or the highest number of vulnerability checks. We liked best the two that did a decent job at their fundamental purpose: finding known security vulnerabilities. The two that shone brightest on this front were ISS' Internet Scanner and Nessus Security Scanner. Unfortunately, it's a case of the best of the worst.
The Numbers Game
Sheer number counts can be misleading in many ways. For example, the intrusion-detection vendors play the numbers game when it comes to the number of attack signatures their products employ. Virus scanners sport numbers to back their "efficiency" at detecting hostile code. The vulnerability-scanner space isn't any different, as the marketing pitches tout the raw number of "vulnerability checks" these products have built-in. One would think more is better, but based on our testing, we beg to differ. We would much rather use a product that accurately identifies and reports on critical vulnerabilities than use one that is inaccurate and has a billion checks that may or may not work. Consider that CyberCop Scanner, which boasts 730 checks, found 12 out of 17 vulnerabilities, while Nessus Security Scanner, claiming only 550 checks, found 15 out of 17. While this is admittedly a small sample size, for the sake of our network's security, we would still go with the latter.
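The arithmetic behind that comparison is worth making explicit. As a purely illustrative back-of-the-envelope calculation using the figures cited above (17 planted vulnerabilities; check counts as claimed by the vendors):

```python
# Detection results from our testing: (claimed checks, vulnerabilities found).
# Figures are those cited in this review; 17 vulnerabilities were planted.
PLANTED = 17

results = {
    "CyberCop Scanner": (730, 12),
    "Nessus Security Scanner": (550, 15),
}

for product, (checks, found) in results.items():
    rate = found / PLANTED * 100
    print(f"{product}: {checks} checks, {found}/{PLANTED} found ({rate:.1f}%)")
```

CyberCop's 730 checks yield roughly a 70.6 percent detection rate against our targets, while Nessus' 550 checks yield about 88.2 percent: More checks clearly did not mean better coverage.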
Unfortunately, the numbers game doesn't end there. The truth is that we don't know exactly what a vendor is classifying as an individual "check," let alone how useful that particular check is in the grand scheme of things. For example, Axent's NetRecon alerted us in the report that "portmap might be running on a high port," which was immediately followed by the "portmap is running on a high port" alert. Is that one vulnerability check or two? Mitre Corp.'s CVE (Common Vulnerabilities and Exposures) project is attempting to bring some method to the madness by enumerating and classifying known vulnerabilities. This could help bring some objectivity into the picture, but the project is still in its infancy, and vendors have only begun to adopt it.
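CVE could tame this particular numbers game by letting overlapping checks be normalized to a single identifier. A minimal sketch of the idea, using NetRecon's near-duplicate portmap alerts (the CVE mappings shown are illustrative, not the vendor's own):

```python
# Hypothetical scanner output: two near-duplicate portmap alerts plus one other.
findings = [
    {"alert": "portmap might be running on a high port", "cve": "CVE-1999-0632"},
    {"alert": "portmap is running on a high port",       "cve": "CVE-1999-0632"},
    {"alert": "wu-ftpd buffer overflow",                 "cve": "CVE-1999-0878"},
]

# Deduplicate by CVE identifier: two vendor "checks" collapse into one entry.
unique_cves = {f["cve"] for f in findings}
print(len(findings), "alerts ->", len(unique_cves), "distinct CVE entries")
```

Until vendors map their checks to a common enumeration like this, raw check counts will remain apples-to-oranges comparisons.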
Another misleading number is the quantity of items found by a scanner (which is directly proportional to the final page count of the report). While consultants will agree there is often an inherent credibility to a thick report, a report that is overly verbose but still misses major security holes is a problem. Perhaps the day will come when the differentiating factors among these products are left only to the quality of the report, the interface or the rates at which these products can scan large networks. Today, however, only one clear delineator levels the playing field: accuracy in detecting major vulnerabilities.
If we were to build our ideal vulnerability scanner, it would comprise a few fundamental components. First and foremost, it would have an up-to-date database of vulnerability checks. While organizations should have methods in place to monitor new vulnerability announcements, these products should not be months behind in looking for big holes. Second, the scanner would have to be highly accurate, with few false positives. Hunting down a few phantom alerts in a small report is one thing; hunting down hundreds or even thousands of them after a large scan is quite another. On our tiny test LAN, the false reports were annoying. If the ratios we saw hold true on larger networks, this will be a much bigger problem in enterprise environments.
The ideal scanner would have some sort of scalable back end that can store multiple scans and provide some means of performing trend analysis. While products like Internet Scanner let you pull up past scans for comparison, products like eEye's Retina don't appear to have any mechanism for managing multiple scan sets. Finally, the ideal scanner would contain clear and concise information for fixing any discovered problems. Products like Axent's NetRecon are fairly polished on this end, and Internet Scanner has practically perfected it, but products like SAINT and SARA are severely lacking in providing specific instructions on repairing the identified problems.
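The trend analysis we'd want from that back end amounts to little more than a diff between scan snapshots: which findings went away, and which are new. A minimal sketch of the idea (the scan dates and vulnerability names are hypothetical):

```python
# Hypothetical findings from two scans of the same host, stored as sets.
scan_march = {"ToolTalk overflow", "sadmind overflow", "open portmapper"}
scan_april = {"open portmapper", "BIND NXT overflow"}

fixed = scan_march - scan_april   # resolved since the previous scan
new = scan_april - scan_march     # newly introduced or newly detected

print("fixed:", sorted(fixed))
print("new:  ", sorted(new))
```

Even this trivial comparison is more than some of the products reviewed here can manage, since they keep no history of past scans at all.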
While every organization should have some mechanism to both check and validate system-level security controls, the shortcomings of these products reinforce the importance of strong OS lockdown procedures. In the end, however, we'll choose an aesthetically displeasing report enumerating all our vulnerabilities over a state-of-the-art, virtual reality, network topology fly-through, a multicolored set of graphs, or any other iconic representations of most of our vulnerabilities. In system security, access to accurate data is what cuts it in the real world. (Check out our online sidebar for a list of the actual reports.)