Network Security Analysis: A New Approach
Network security analysis has not traditionally been a team sport; to their own detriment, security decision makers rarely collaborate on data analysis with peers at other organizations. This can be due to a variety of reasons: fear of sharing sensitive information, understaffing, a lack of training, or a lack of tooling.
The resulting go-it-alone approach has significant implications, as network security analysis frequently requires wide-scale human inspection and analysis of anomalous traffic. Since most organizations use custom software, or custom variants of off-the-shelf software, to look for threats, observations must be manually compared with reports from other organizations. The result is a great deal of delay as members of different organizations wait for their colleagues to get an email, find time to read it, slip investigation into their own duties, and then – hopefully – respond.
This slow pace of network traffic analysis for security threats places organizations on the defensive when it comes to keeping up with modern attack vectors: the new generation of botnet and DDoS tools, the increased presence of untrustworthy personal devices on corporate networks, and the limited security provided by IoT devices.
We need technology that automates the constant analysis of traffic patterns and performs the time- and labor-intensive work of comparing results to determine an appropriate response. One emerging approach uses collaborative, peer-to-peer (P2P) infrastructure to automatically detect traffic anomalies and enable that information to be shared. As a result, cloud providers, ISPs, VoIP companies, and universities can exchange information and analyze traffic together, at high traffic volumes, strengthening the IT infrastructure through cyber information sharing and analysis.
A P2P, collaborative analysis approach presents several advantages over current techniques:
Network and system administrators spend an unacceptable amount of their time poring over traffic logs and monitoring consoles. They might spot an anomaly, reach out to peers, and then, over a period of weeks or even months, conclude either that it was a legitimate threat (at which point it is potentially too late) or that it was just a random, relatively harmless program an employee was running.
All of this manual time and investigation is not conducive to rapid and accurate threat analysis. With a peer-to-peer approach, data traffic summaries and suspicious traffic can be flagged by one administrator and then compared to traffic at other sites flagged by other admins. This comparison can be used to pinpoint any similarities between the flagged traffic; for example, it can be used to discover shared command-and-control networks or to unmask spoofed addresses used as part of an attack.
As knowledge about the traffic is gained, it too can be shared with the peer-to-peer network, allowing the shared knowledge set to grow until there's enough information to allow admins to confidently ignore or mitigate the anomaly.
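The cross-site comparison described above can be pictured with a minimal sketch. The record structure, field names, and addresses here are illustrative assumptions, not taken from any particular tool; the point is simply that two administrators' independently flagged traffic summaries can be intersected to surface shared indicators, such as a common command-and-control host.

```python
# Illustrative sketch: intersect flagged traffic summaries from two sites
# to find destinations both administrators flagged independently.
# Field names ("dst", "bytes") and addresses are hypothetical.

def shared_indicators(site_a_flags, site_b_flags):
    """Return destination hosts flagged at both sites."""
    hosts_a = {flag["dst"] for flag in site_a_flags}
    hosts_b = {flag["dst"] for flag in site_b_flags}
    return hosts_a & hosts_b

# Each site shares only compact summaries of its suspicious flows.
site_a = [{"dst": "203.0.113.7", "bytes": 14200},
          {"dst": "198.51.100.2", "bytes": 900}]
site_b = [{"dst": "203.0.113.7", "bytes": 13100},
          {"dst": "192.0.2.44", "bytes": 4000}]

print(shared_indicators(site_a, site_b))  # {'203.0.113.7'}
```

A real deployment would compare far richer features (timing, payload fingerprints, protocol behavior), but even this simple set intersection shows why a second site's data turns an ambiguous local anomaly into a corroborated indicator.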
Improved detection, reduced damage
It is not a stretch for enterprises to recognize that a collaborative approach is mutually beneficial. While some are hesitant to assume the risks associated with sharing proprietary information from their networks with peers, they increasingly recognize that conventional, siloed cyber defense systems are proving ineffective.
Ransomware attacks are a prime example. The Texas Hospital Association noted earlier this year that response and recovery from ransomware attacks are the biggest cybersecurity issue for hospitals.
Trying to identify and address ransomware or any malware threat in a silo can be a challenge for hospitals of any size, particularly for those that lack significant security infrastructure and an incident response team operating 24/7. For this reason, hospitals offer a compelling use case for a P2P approach in which like-minded organizations facing a similar and growing challenge can share network traffic data for better visibility into what a ransomware attack might look like.
Information about traffic, including ransomware, can be passed around different hospital sites automatically. If ransomware occurs at one site, information about the network traffic that led up to the infection can be shared with others. If they recognize that pattern in their own traffic, ransomware can be detected and mitigated before it has an effect.
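The "recognize that pattern in their own traffic" step can be sketched as a subsequence match: one hospital shares the ordered sequence of network events that preceded its infection, and a peer checks whether the same sequence appears, in order, in its own event stream. The event labels below are hypothetical placeholders, not real indicators.

```python
# Illustrative sketch: does a shared pre-infection event sequence appear,
# in order, within a site's local traffic events? Event names are made up.

def pattern_seen(shared_pattern, local_events):
    """True if shared_pattern occurs as an in-order subsequence."""
    remaining = iter(local_events)
    # Each step must be found somewhere after the previous match.
    return all(step in remaining for step in shared_pattern)

shared = ["dns:badhost.example", "tls:203.0.113.7", "smb-scan"]
local = ["dns:ok.example", "dns:badhost.example", "http:cdn",
         "tls:203.0.113.7", "smb-scan"]

print(pattern_seen(shared, local))  # True: the precursor pattern is present
```

Production systems would use fuzzier matching (time windows, partial scores) rather than exact labels, but the principle is the same: a pattern learned at one site becomes an early-warning signature at every peer.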
Detecting these attacks early can be the key to reducing the damage, as well. The delay between attack onset and attack mitigation is critical with many types of attacks, such as DDoS, in which the additional time provides positive feedback to the attacker, who will continue to send more and more traffic at the target network. Allowing multiple defenders to coordinate their response can result in earlier detection, faster DDoS mitigation, and a dramatic decrease in downtime and collateral damage.
Identify attacks that isolated analysis can't see
The ability to automatically detect traffic anomalies and share the information using a distributed, P2P infrastructure can also provide a "big picture analysis" that is not possible at individual sites. Patterns of attack across networks can reveal the attacker's goals and motives and help predict future behavior. At a lower level, traffic sensors at several locations across the internet can unmask spoofed traffic attacks, and thus provide a more targeted mechanism to defeat them.
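The spoofed-traffic idea can be sketched simply: if sensors at several vantage points share what they observe, a packet whose claimed source prefix arrives at a sensor that never sees legitimate traffic from that prefix is likely spoofed. The ingress map, sensor names, and addresses below are illustrative assumptions, not part of any real deployment.

```python
# Illustrative sketch: flag likely-spoofed sources by checking whether a
# packet's claimed source prefix arrived at the expected vantage point.
# The ingress map and sensor names are hypothetical.
import ipaddress

# Which sensor normally sees legitimate traffic from each source prefix.
EXPECTED_INGRESS = {
    ipaddress.ip_network("198.51.100.0/24"): "sensor-east",
    ipaddress.ip_network("203.0.113.0/24"): "sensor-west",
}

def likely_spoofed(src_ip, observed_at):
    """Flag a packet whose claimed source arrived at the wrong sensor."""
    addr = ipaddress.ip_address(src_ip)
    for prefix, sensor in EXPECTED_INGRESS.items():
        if addr in prefix:
            return observed_at != sensor
    return False  # unknown prefix: no basis to judge

print(likely_spoofed("198.51.100.9", "sensor-west"))  # True: wrong ingress
```

No single sensor can make this judgment alone; it is only the shared view across vantage points that makes the inconsistency visible.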
Collaborating network systems can also operate both at the infrastructure layer (core ISPs and network backbone providers) and at the network edge, so that operators understand patterns across their entire user base. For example, an edge ISP can more quickly discover the signatures of compromised home IoT equipment and throttle traffic at the home router in order to provide a more consistent experience for its users.
In the end, administrators need both to perform regular inspection of network traffic and to work with other administrators to understand that traffic. Unfortunately, few administrators have the time, energy, or training to do this analysis all on their own. Automated traffic analysis tools can help with this problem, but their capabilities are vastly improved through the use of P2P mechanisms.
Such mechanisms are very resilient to interference, and do not require central administration. Groups can self-organize according to their need and share information as they feel comfortable. When deployed, the result is a reduction in attack detection time, a decrease in the amount of administrator effort to identify an appropriate response, and more effective mitigation of network threats.
Adam Wick is the Research Lead for Mobile Security & Systems Software at Galois, where he leads projects in mobile device security, secure operating system design and implementation, virtual private networks, corporate network defense, and wireless systems. He has been working as a staff engineer, senior project manager, and research lead at Galois since 2006.