Over the past year or so, a series of articles discussed the flaws in many categories of security products. Articles from Bruce Schneier and SANS proclaimed the utter failure of antivirus and host-based intrusion detection systems. Then there was the study from Imperva attacking the effectiveness of AV, which became a cause célèbre among security vendors.
Even with intrusion detection systems, the signature model seems to be a losing battle, with armies of analysts toiling away to keep up with the bad guys while we wait for promises of big data to save the day. IDS is a loud, cranky thing, constantly crying for attention, like a newborn infant. It's hard to find demonstrable ROI with the level of effort required for implementation and maintenance.
However, despite all the problems with IDS, anti-virus and other security products, they still help you hit an audit checkbox. It's part of the compliance game, where everyone pretends that meeting a set of pre-defined requirements is an effective way to address risk. It's not, but it provides a mechanism for everyone to say, "We're doing something about security, and here's the checkmark to prove it."
Unfortunately, the ability to identify and address actual risks is getting harder. It's not just trying to find a needle in a haystack; it's trying to find a needle in a million haystacks. It's the Black Swan event described by Nassim Taleb, multiplied by the thousands.
So we throw more computational power at the problem, but all we end up with is more data than we can manage, along with endless discussions about how to analyze that data in a way that will actually help us predict attacks in real time. The problem here isn't just the signal-to-noise ratio of false positives, but the false negatives that insidiously degrade the security of our organizations.
Maybe the problem isn't that we don't collect enough data, but that we aren't discerning enough regarding the data we amass. Gerd Gigerenzer, an expert in the field of smart heuristics and bounded rationality, offers a different perspective.
Drawing on studies in medicine and economics, he validates the effectiveness of "fast and frugal" decision trees, demonstrating that by using less information we can often make better predictions.
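A fast and frugal tree of the kind Gigerenzer describes checks a handful of cues in a fixed order, and every cue offers an immediate exit, so most decisions are made after one or two questions. The sketch below applies the idea to security alert triage; the cue names and the severity threshold are hypothetical, chosen purely for illustration.

```python
# A minimal sketch of a "fast and frugal" decision tree applied to
# alert triage. Each node inspects exactly one cue and can exit
# immediately; cues and thresholds here are hypothetical examples.

def triage_alert(alert):
    """Classify an alert as 'escalate' or 'ignore' using a few cues."""
    # Cue 1: a known-bad source exits immediately.
    if alert.get("source_on_blocklist"):
        return "escalate"
    # Cue 2: anything touching a critical asset exits immediately.
    if alert.get("target_is_critical_asset"):
        return "escalate"
    # Cue 3: whatever remains is judged on severity alone.
    if alert.get("severity", 0) >= 7:
        return "escalate"
    return "ignore"

print(triage_alert({"source_on_blocklist": True}))   # escalate
print(triage_alert({"severity": 3}))                 # ignore
```

The point is not the particular cues, but the shape of the procedure: a short, ordered list of questions with early exits, rather than a model that weighs every available feature.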
But what if the source of difficulty is bigger than how we use data in the security realm? Maybe the real issue is the challenge of automation in general. In Lisanne Bainbridge's seminal work, "Ironies of Automation," she discusses this critical problem. We automate in order to eliminate human error, but in doing so we merely relocate it: the automation itself was designed by a human, errors and all. I wonder what Captain Kirk would do with this Kobayashi Maru, this spectacular no-win situation?
The truly mystifying aspect of this dilemma is that the industry seems to keep getting away with selling us empty promises: solutions that simply don't work against the Gordian knot of security facing enterprises every day. And we let them. If a networking vendor sold a device that was supposed to forward packets but didn't, it wouldn't have very many customers.
So why do we continue to buy security snake oil? Probably the same reason people still flock to magicians. We really want to trust that these products will make the bad guys disappear. We want to believe that we can eliminate risk, that we will ultimately locate and stop the source of an attack, like finding the pea in a shell game. Unfortunately, the game is rigged, and in most cases, we'll never find the pea.
[Is it really possible for security and operational efficiency to coexist? Michele Chubirka dives into this thorny topic in her Interop workshop "Beware the Firewall, My Son! The Jaws that Bite, the Claws that Catch!" Register today!]