Generating metrics that rate a company's ability to excise known vulnerabilities from its systems is just one of the enticing applications of the recently announced Project Sonar, a crowdsourced initiative that aims to crunch continuous scans of the Internet to catalog known vulnerabilities across every Internet-facing system. This big data project is the brainchild of HD Moore, chief research officer at Rapid7 and creator of the open source Metasploit penetration testing framework.
One impetus for Project Sonar is to help companies secure their systems better. Another is to hold vendors accountable for patching software and firmware rapidly, and ensuring that those patches propagate. "We are hoping that through regular updates and the publication of significant findings we can keep negligent vendors in the hot seat," Moore told me via email. "In the past, we found that calling out the market share and specific risk -- healthcare, utility, etc. -- motivated vendors, industry groups and users of the affected products to solve the problem or find alternatives. At the least, even if there is little response from the vendors, the risk will be more transparent as a result of this work."
Robert David Graham, CEO of Errata Security and creator of the Masscan tool, which Project Sonar includes in its Internet-scanning arsenal, lauds the initiative. "I think it is a great idea," he told me via email. "Security is about metrics. One of the biggest metrics is 'Where do I stand relative to my peers?' With this sort of project, people can finally see their peers -- and hence, where they stand relative to them."
Graham also likes the sheer scale of the effort. "I thought of continuous scanning, but for only a few ports. I didn't envision something like this," he said.
Already, partial scans of the Internet hint at what's possible. Earlier this month, for example, Graham used Masscan to count the number of Internet-accessible D-Link routers sporting a recently disclosed backdoor. "I ran two scans of the Internet on port 80, one with the user-agent of 'masscan/1.0,' and the other with the offending user-agent of 'xmlset_roodkcableoj28840ybtide', in order to see the difference," Graham said in a blog post. While the eight-hour scan of the entire Internet wouldn't find every vulnerable device, it turned up plenty. "Out of 2,139 total vulnerable systems, 1,510 are located at Slovak Telecom," Graham said. "That's a typical pattern: Smaller countries are dominated by only a few ISPs who tend to roll out a standard configuration DSL modem to their customers with a vulnerability."
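The logic behind Graham's differential scan can be sketched in a few lines. This is a hypothetical illustration, not Masscan's actual code: it builds the two HTTP probes (one with a benign User-Agent, one with the backdoor string) and classifies a device based on how the two responses differ. The status-code heuristic and probing are stand-ins for illustration only.

```python
# Hypothetical sketch of a differential backdoor check like Graham's
# D-Link scan. The probe-sending step is stubbed out; only the request
# construction and classification logic are shown.

BENIGN_UA = "masscan/1.0"
BACKDOOR_UA = "xmlset_roodkcableoj28840ybtide"

def build_probe(host: str, user_agent: str) -> str:
    """Build the raw HTTP GET request a scanner would send."""
    return (
        "GET / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"User-Agent: {user_agent}\r\n"
        "Connection: close\r\n\r\n"
    )

def looks_vulnerable(benign_status: int, backdoor_status: int) -> bool:
    """A device that rejects the benign probe (401) but accepts the
    backdoor probe (200) matches the authentication-bypass pattern."""
    return benign_status == 401 and backdoor_status == 200

# Canned example results in place of live responses:
print(looks_vulnerable(401, 200))  # matches the bypass pattern
print(looks_vulnerable(200, 200))  # admin page is open either way
```

Comparing the two scans rather than trusting a single response is what separates "device answered" from "device treats the magic string specially."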
Graham said he's still working through captured data -- including information gathered by a subsequent scan of port 8080 for vulnerable D-Link devices -- and that it's taking longer than he expected. And that point gets to one of Project Sonar's main challenges: the huge quantity of data that researchers are attempting to automatically gather, process and then disseminate.
One related challenge remains people power: Who's going to run these scans, refine the tools that crunch the bulk data for known vulnerabilities, and handle notifications to vendors and service providers? "Over the last two years we have worked with external researchers and various CERT teams to classify and report on vulnerabilities identified by large-scale scans," Project Sonar creator Moore said. "The results have been a bit mixed -- while collaborating with a small group of folks has its benefits, we barely made a dent in the overall queue."
But Moore hopes that by previewing what's possible with Project Sonar, more security researchers will join in -- for example, to write better scripts and review the dataset for interesting new findings. Businesses likewise stand to gain from studying the dataset or related results.
"For example, we have published a script that identifies all assets owned by a particular company, using the SSL common names as a match specifier," Moore said. "The datasets can also be used to identify all instances of a vulnerable product, clarifying the risk for specific organizations -- as well as the Internet as a whole."
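The common-name matching Moore describes can be sketched simply. This is not Rapid7's published script; it is an illustrative stand-in that assumes scan records shaped like `{"ip": ..., "cn": ...}` (field names are invented for the example) and matches any certificate whose common name sits under a company's domain.

```python
# Hypothetical sketch of matching scan records to a company by SSL
# certificate common name. Record fields ("ip", "cn") are illustrative,
# not the actual Sonar dataset schema.

def assets_for(records, domain):
    """Return records whose certificate CN is the domain or a subdomain."""
    suffix = "." + domain
    return [
        r for r in records
        if r["cn"] == domain or r["cn"].endswith(suffix)
    ]

records = [
    {"ip": "198.51.100.7", "cn": "www.example.com"},
    {"ip": "203.0.113.9",  "cn": "mail.example.com"},
    {"ip": "192.0.2.44",   "cn": "shop.example.org"},
]

matches = assets_for(records, "example.com")
print([r["ip"] for r in matches])  # the two example.com hosts
```

Even a filter this crude surfaces hosts an organization may not know it is exposing, which is exactly the inventory problem Moore is pointing at.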
While the project relies on automated tools, they don't yet spot every type of known vulnerability. "Some classes of vulnerabilities are easy to identify today," Moore said. "These include outdated software versions, the mere existence of certain types of probe responses -- snmp, wdbrpc, etc. -- and instances where something that is supposed to be random -- SSL public keys -- is duplicated between unrelated organizations."
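The last check Moore mentions, spotting "random" key material that isn't unique, boils down to grouping key fingerprints across organizations. Here is a minimal sketch under the assumption that the dataset can be reduced to (fingerprint, organization) pairs; the fingerprints and ISP names are made up.

```python
# Hypothetical sketch: flag SSL public-key fingerprints observed at
# more than one unrelated organization -- a sign the supposedly
# random key material is shared (e.g., baked into device firmware).
from collections import defaultdict

def shared_keys(observations):
    """observations: iterable of (fingerprint, organization) pairs.
    Returns {fingerprint: set_of_orgs} for keys seen at 2+ orgs."""
    orgs_by_fp = defaultdict(set)
    for fp, org in observations:
        orgs_by_fp[fp].add(org)
    return {fp: orgs for fp, orgs in orgs_by_fp.items() if len(orgs) > 1}

obs = [
    ("ab:cd:01", "ISP-A"),
    ("ab:cd:01", "ISP-B"),   # same key at an unrelated ISP
    ("ff:ee:02", "ISP-A"),
]
print(shared_keys(obs))  # only the duplicated fingerprint survives
```

A duplicated key usually means thousands of devices shipped with identical firmware-embedded certificates, so anyone with one device can impersonate or decrypt traffic to the rest.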
Moore's hoping that more security researchers will join in and write tools to spot new types of vulnerabilities. "One goal of this project is to identify new categories of vulnerabilities and new ways to automatically flag vulnerable systems from the dataset," he said. "In some cases, additional types of data still need to be gathered."
Already, the project offers information security managers the prospect of harvesting scans of their networks to prioritize their remediation efforts so that they can kill the worst or most prevalent bugs first. CIOs could also hold negligent vendors accountable for not providing more timely patches.
The biggest long-term possibility, however, is public ratings of every company's information security. What if a CISO's or even a CEO's compensation package were tied in some way to those ratings? Just imagine how quickly corporate information security would improve.