Data Center Security Requires Paranoia
A whitelist model is a key strategy in today's threat environment.
July 25, 2016
Imagine you are walking home very late at night. In the distance, you can hear rough laughter and the sound of breaking glass. You walk a little faster, your heart rate elevated, until you can put your key into the lock at your front door, let yourself in, and then lock the door behind you. You breathe a sigh of relief and head off to bed, feeling safe inside the trusted environment of your own home.
For decades, companies and government institutions have stored the “crown jewels” of their computing assets in a trusted environment: the data center. Now data centers -- and by extension, cloud computing facilities like Amazon Web Services and Microsoft Azure -- are no longer perceived as trusted environments. Insider threats, malware, and a melting perimeter mean the network boundary between the data center and the Internet can no longer be treated as a reliable DMZ.
Despite the more than $10 billion a year organizations spend on network perimeter security, IT strategists must account for what Forrester calls the Zero Trust Model. Or, to borrow from the late, great former CEO of Intel, Andy Grove, organizations need to be paranoid to deal with today's data center and cloud security challenges.
We also must rethink the role of networks in security. Yes, a secure network is a critical component of any security strategy, but it is not the only foundation. Indeed, there is an interesting paradox between the roles of networking and security. The TCP/IP protocol suite, sponsored by DARPA, was designed to survive a nuclear war, so finding a route when other routes were unavailable was its critical attribute. IP networking is about "can": how can I route a packet from one place to the next?
Security, however, focuses on "should": should I allow an individual or server to connect to another computing resource? It is a statement of permission, not connection.
To allow networks to do what they do best -- route packets -- we need to follow some new principles of data-center security. Network segmentation is useful, but extremely limited from a security perspective because it is coarse-grained. It does not easily support the more precise segmentation strategies that are increasingly required.
Here are four principles that must be considered in any data center or cloud security strategy.
Paranoia is not only healthy, it’s necessary. Today’s firewall technology works on a blacklist model: essentially everything can communicate unless it is explicitly blocked; think of it as a router with a bump in the wire. This forces companies to maintain large lists of blocked IPs that grow over time, and it breeds complexity and inertia into security operations. A more paranoid approach, the whitelist model, assumes everything is blocked until it is allowed, reducing the attack surface of endpoint communications to the precious few permitted flows. Whitelisting also shrinks the rule set that must be maintained -- everything is no until it is yes (should vs. can) -- and it removes the need to carry large, legacy blocks of blacklisted IPs.
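To make the contrast concrete, here is a minimal sketch in Python -- the flow attributes, tiers, and rule sets are hypothetical illustrations, not any vendor's API -- of the difference between a default-allow blacklist check and a default-deny whitelist check:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src: str     # source workload or IP
    dst: str     # destination workload or IP
    port: int    # destination port

# Blacklist model: everything can communicate unless explicitly blocked.
# The block list only grows over time.
BLACKLIST = {("10.0.9.14", 23), ("203.0.113.50", 445)}   # (dst, port) pairs to drop

def blacklist_allows(flow: Flow) -> bool:
    return (flow.dst, flow.port) not in BLACKLIST

# Whitelist model: everything is blocked until explicitly allowed.
# The rule set stays small -- only the precious few permitted flows.
WHITELIST = {("web-tier", "app-tier", 8443),
             ("app-tier", "db-tier", 5432)}

def whitelist_allows(flow: Flow) -> bool:
    return (flow.src, flow.dst, flow.port) in WHITELIST

# A flow nobody anticipated:
f = Flow(src="web-tier", dst="db-tier", port=5432)
print(blacklist_allows(f))   # True  -- allowed by default because it was never blocked
print(whitelist_allows(f))   # False -- not on the list, so it is denied
```

Under the whitelist, the policy describes what should happen; anything not explicitly listed is dropped, rather than the operator having to enumerate everything that must not happen.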
Segmentation is more than microsegmentation. Over the past few years, the industry has begun to adopt segmentation strategies beyond the VLAN/subnet/zone structures provided by networks. More segmentation means less lateral spread of unauthorized communications. However, the term microsegmentation can be confusing, because it covers a range of approaches: from simply segmenting environments -- such as separating development from production -- to process-level segmentation between servers (nanosegmentation).
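As an illustration only -- the labels, names, and rule format below are hypothetical, not a particular product's data model -- the same intent can be expressed at very different granularities:

```python
# Coarse-grained environment segmentation: keep development away from production.
ENVIRONMENT_RULES = [
    {"src_env": "development", "dst_env": "production", "allow": False},
    {"src_env": "production",  "dst_env": "production", "allow": True},
]

# Fine-grained, process-level (nano) segmentation: this process, to that process, on one port.
PROCESS_RULES = [
    {"src": "web/nginx",    "dst": "app/gunicorn", "port": 8443, "allow": True},
    {"src": "app/gunicorn", "dst": "db/postgres",  "port": 5432, "allow": True},
]
```

The coarse rules are easy to write but leave everything inside an environment free to move laterally; the process-level rules constrain exactly which pairs may communicate, and on which ports.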
Tying security to infrastructure upgrades isn't feasible. For the past 20 years, segmentation, and much of the data center security stack, has been part of the network (switches, routers, firewalls, IDS devices). In the past five years, others, such as hypervisor suppliers, have stepped in to provide security control points. Making the infrastructure more secure certainly has benefits, but it also has three negative consequences:
If you do not own the infrastructure, you may lack control over its security;
If you are tied to the infrastructure, you must update/upgrade your infrastructure to improve your security; and
As computing evolves from bare-metal servers and VMs to containers and serverless computing, control points like the hypervisor either do not encompass security requirements or require complicated constructs like gateways.
If software is eating the world, it must also eat security.
If everything is dynamic, my security must be too. New computing architectures are driving more heterogeneous, hybrid, dynamic, and distributed computing environments. This requires security systems that can cope with rapid and regular change in both applications and infrastructure -- think about the short shelf life of a Docker container. Security must travel with the application and absorb any changes in real time.
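As a minimal sketch of that idea -- assuming a hypothetical label-based policy, with illustrative labels, workload names, and addresses -- the rule is written against labels, and the concrete rules are recomputed as workloads appear and disappear, so policy follows a short-lived container rather than an address:

```python
# Policy is written against labels; enforcement is derived from the live inventory.
POLICY = [("web", "app", 8443)]        # (src_label, dst_label, port)

inventory = {}                         # workload id -> (label, ip)

def enforced_rules():
    """Expand the label-based policy into concrete IP rules for the current inventory."""
    rules = []
    for src_label, dst_label, port in POLICY:
        for s_label, s_ip in inventory.values():
            for d_label, d_ip in inventory.values():
                if s_label == src_label and d_label == dst_label:
                    rules.append((s_ip, d_ip, port))
    return rules

# A short-lived container spins up: the policy follows its label, not its address.
inventory["web-7f3a"] = ("web", "10.0.1.17")
inventory["app-92c1"] = ("app", "10.0.2.44")
print(enforced_rules())                # [('10.0.1.17', '10.0.2.44', 8443)]

# The container terminates: the derived rules disappear with it, in real time.
del inventory["web-7f3a"]
print(enforced_rules())                # []
```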
In designing data center and cloud computing security, we must invert the dynamic in which attackers only need to be right once while IT and security teams need to be perfect all the time. We need approaches built for a more dynamic world in which computing runs over a broad array of networks. And every interaction between computing systems must be assumed to be untrusted unless trust is explicitly granted. Indeed, perhaps only the paranoid will survive.
Alan Cohen is the Chief Commercial Officer and a board member at Illumio. Alan’s prior two companies, Airespace (acquired by Cisco) and Nicira (acquired by VMware), were the market leaders in centralized WLANs and network virtualization, respectively. He also is an advisor to several companies, including Netskope, Highfive, and Vera.