Network Security Shrinks to Zero-Trust

The traditional network security perimeter has become smaller and smaller as attackers become more sophisticated. But how practical is the zero-trust model?

Daniel Conde

August 9, 2018


Networking and security have always been closely intertwined, supported by well-known tools and techniques such as network firewalls, traffic analytics, and DDoS protection. However, network security is rapidly changing to keep up with evolving threats. In this blog, I'll look at the broader trends reshaping how companies protect their assets from intruders.

We’ve all heard that the perimeter is dead -- that it no longer makes sense to use a firewall to create a moat around a data center castle to protect its assets. Secured assets are deployed widely on-premises, in remote offices, and in the cloud, so we can no longer protect just one area. Consequently, the perimeter needs to become smaller in order to protect assets that are more distributed than ever before.

Attacks are no longer perpetrated only from the outside in; they may move laterally, so we ought to assume that some network element has been breached and that infiltrators can hop between systems. This led to the idea that security protection should be placed as close as possible to the items being protected. Vendors have pushed concepts such as micro-segmentation, which defends against these attacks by creating small perimeters around secure enclaves.
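At its core, micro-segmentation replaces one big perimeter with a default-deny policy between small groups of workloads: nothing talks to anything unless a rule explicitly allows it, which is what blocks lateral movement. A minimal sketch of that idea, with entirely hypothetical segment names and ports:

```python
# Micro-segmentation modeled as a default-deny policy table.
# Segment names, ports, and rules here are illustrative, not a real product's API.

ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),   # web servers may reach app servers
    ("app-tier", "db-tier", 5432),    # app servers may reach the database
}

def is_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default-deny: a flow passes only if it is explicitly whitelisted."""
    return (src_segment, dst_segment, port) in ALLOWED_FLOWS

# A compromised web server trying to hop straight to the database
# is denied: is_allowed("web-tier", "db-tier", 5432) is False.
```

Real enforcement happens in a hypervisor, host firewall, or smart NIC, but the policy model is the same: every flow is checked against an explicit allow-list.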

In other words, put a guard dog next to your prized possession, instead of far out by a fence. Thus, the trend is to shrink the perimeter and make many perimeters.

But how far can this line of reasoning go? Perhaps you can’t even trust other programs running on the same server as yours. This led to the idea of zero-trust security: that you cannot trust anything outside -- or even inside -- a perimeter. The term was coined around 2010 by Forrester analyst John Kindervag. Since then, a slew of vendors have claimed to offer zero-trust security, which unfortunately waters down the concept. Some also use the terms software-defined perimeter or BeyondCorp, the latter in reference to Google's implementation of the zero-trust model. Some zero-trust products require changes to the application-development process to improve security. Others work with existing infrastructure, apps, and endpoints, but with limitations.

Networking pros need to take a pragmatic approach to these new network security models. I think the zero-trust concept can be an eventual goal, but it may not be easy to attain in the short term. While it's possible to re-architect an application to meet some zero-trust definitions, it's not practical in many cases.

A more realistic approach is to take existing infrastructure into account and create the smallest practical perimeter around sensitive assets. Whether the asset is an endpoint laptop, a server, or even an application, it makes sense to minimize open connections to untrusted systems on a flat network. Traditional features like VLANs can be used initially to create small perimeters, even if they are cumbersome to maintain. Then you might implement advanced network segmentation or zero-trust techniques.

For example, you could convert VLAN configurations into micro-segments. Those segments can correspond to applications rather than coarse groupings such as company departments. Similar approaches may be used for zero-trust. In both cases, security and business considerations come before technology choices.
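The conversion step above amounts to regrouping: take the hosts in a department-scoped VLAN and re-bucket them by the application they serve, since the application is the boundary you actually want to defend. A sketch of that regrouping, assuming hypothetical host names and a hand-maintained host-to-application mapping:

```python
# Hypothetical sketch: derive application-scoped micro-segments from an
# existing department-based VLAN layout. All names are illustrative.

vlan_hosts = {
    10: ["web01", "web02", "app01", "db01"],   # one flat "engineering" VLAN
}

# Business/security input: which application each host belongs to.
host_app = {
    "web01": "storefront", "web02": "storefront",
    "app01": "storefront", "db01": "storefront-db",
}

def to_microsegments(vlan_hosts, host_app):
    """Regroup VLAN members by application rather than department."""
    segments = {}
    for hosts in vlan_hosts.values():
        for host in hosts:
            segments.setdefault(host_app[host], []).append(host)
    return segments
```

The hard part is not the code but the `host_app` table: it encodes the security and business decisions about where boundaries belong, which is why those considerations come before the technology choice.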

We should also assume that protections may not always work. We need a way to shut systems down once we detect a compromise. This includes closing network segments, shutting down applications, or redirecting traffic.
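Those three containment actions can be wired into a single quarantine routine that runs as soon as a segment is flagged. In this minimal sketch the handlers just record the intended actions; a real deployment would call the network controller's or orchestrator's API instead, and every name here is hypothetical:

```python
# Hedged sketch of automated containment: once a segment is flagged as
# compromised, quarantine it. Handlers only record actions for illustration;
# a real system would invoke controller APIs. All names are hypothetical.

actions = []

def quarantine(segment: str) -> None:
    actions.append(f"close {segment}")         # isolate the network segment
    actions.append(f"stop apps in {segment}")  # shut down its applications
    actions.append(f"redirect {segment} traffic to sinkhole")

quarantine("app-tier")
```

The point is to decide and automate these steps before an incident, so containment is a single call rather than an ad-hoc scramble.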

If a zero-trust model is the goal, networking pros should consider a practical step-by-step approach to achieve it. Waiting for a perfect solution may not work, as that day may never arrive.

About the Author(s)

Daniel Conde

Dan Conde is an IT analyst and consultant. He formerly was an analyst at Enterprise Strategy Group covering enterprise networking technologies including software-defined networking, network virtualization, data center and campus networking, WAN optimization, and network performance management. His experience in product management, marketing, professional services, and software development provides a broad view into the needs of vendors and end-users. Prior to joining ESG, Dan was director of products at Midokura, where he was responsible for product management and marketing for the firm's OpenStack-based network virtualization product. Before that, Dan held product management positions at vendors including VMware, Rendition Networks, NetIQ, and Microsoft. Dan is an alumnus of the University of California, Berkeley, where he received a BA in Computer Science and an MBA from the Haas School of Business.
