Network Computing is part of the Informa Tech Division of Informa PLC


Security Visibility In The Virtualized Data Center

Security professionals crave data. Application logs, packet captures, audit trails: we’re vampiric in our need for information about what’s occurring in our organizations. Seeking omniscience, we believe that more data will help us detect intrusions, feeding the inner control freak that still believes prevention is within our grasp. We want to see it all, but the ugly reality is that most of us fight the feeling that we’re flying blind.

In the past, we begged for SPAN ports from the network team, frustrated with packet loss. Then we bought expensive security appliances that used prevention techniques and promised line-rate performance, but were often disappointed when they didn’t “fail open,” creating additional points of failure and impacting our credibility with infrastructure teams.

So we upgraded to taps and network packet brokers, hoping this would offer increased flexibility and insight for our security tools while easing fears of unplanned outages. We even created full network visibility layers in the infrastructure, thinking we were finally ahead of the game.

Then we came face-to-face with the nemesis of visibility: virtualization. It became clear that security architecture would need to evolve in order to keep up.

Virtualization technologies offer massive benefits for IT infrastructures through the use of generic compute, automation, and self-service. However, they can complicate the practice of information security, especially since many security vendors still seem to be playing catch-up with the pace of innovation in virtualization technologies.

While the introduction of routing and encapsulation protocols in software-defined networking (SDN) promises improved efficiency through the concept of the software-defined data center (SDDC), it can cripple security architectures dependent on legacy tools located on physical servers or appliances outside of the virtualized environment. Many organizations have invested heavily in these security tools, and without the addition of appropriate visibility components, SDN can hamper the ability of security professionals to monitor and protect critical data.

But even with all of the hoopla surrounding SDN, the typical virtualized architecture still consists of a set of clustered hypervisors often managed as one ecosystem. Virtual switching exists within each hypervisor, or there’s distributed virtual switching with a centralized management and control plane internal to the environment. While this presents challenges for security teams less familiar with virtualized network components, east-west traffic can still be segmented to ensure it moves through physical equipment like a top-of-rack (ToR), aggregation, or core L3 switch, which provides ample opportunity for inspection by physical security tools.

Yet as IT organizations increasingly look to advanced virtualization technologies like SDDC, VSAN and SDN to deliver faster, cheaper solutions for the business, security concerns increase. The newer architectures would keep most east-west traffic internal to the virtual data center through the use of routing and encapsulation protocols such as VXLAN. The physical network becomes pure utility transit, with a reduced ability to extract information using existing security controls outside the virtualized environment. It becomes critical to identify opportunities for visibility using native capabilities of the hypervisor or with virtualization add-ons.
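To see why encapsulation blinds external tools, it helps to look at the VXLAN header itself (RFC 7348): the original Ethernet frame rides as opaque payload inside a UDP datagram, so a physical appliance on the transit network sees only outer IP/UDP headers unless it can decapsulate. A minimal sketch of that framing in Python:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN


def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.

    Layout (RFC 7348): flags byte (0x08 = VNI present), 24 reserved bits,
    24-bit VNI, 8 reserved bits. The inner frame is carried verbatim.
    """
    header = struct.pack("!B3xI", 0x08, vni << 8)  # VNI sits in the top 24 bits
    return header + inner_frame


def vxlan_decap(payload: bytes):
    """Return (vni, inner_frame) from a VXLAN-encapsulated UDP payload."""
    flags, word = struct.unpack("!B3xI", payload[:8])
    if not flags & 0x08:
        raise ValueError("VXLAN header without a valid VNI flag")
    return word >> 8, payload[8:]
```

The point of the sketch: a legacy tap or appliance in the physical underlay sees UDP traffic to port 4789 between hypervisor VTEPs; the tenant conversation, including the 24-bit VNI that identifies the virtual segment, is only recoverable after decapsulation.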

Instead of resisting the virtualization shift, security professionals should embrace the benefits offered by the SDDC, which outweigh issues of complexity in delivering visibility. A virtualized data center offers more possibilities for contextual security. Policies and standards can be embedded into the environment through orchestration workflows and self-service templates.

For systems that were previously difficult to identify and segregate by data classification, application-aware tools specific to virtualization can assist in categorizing virtual machines, placing them in zones and enforcing access control based on policies that follow the system or application as it’s moved within the SDDC.
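One way to picture policy that “follows” a workload is to key rules on classification tags rather than on IPs or hosts. The sketch below is purely illustrative (the tag names, zones, and rule matrix are invented for the example, not any vendor’s API): because the decision function never consults the hypervisor a VM lives on, a migration cannot change the enforcement outcome.

```python
from dataclasses import dataclass


@dataclass
class VM:
    name: str
    tags: frozenset   # e.g. {"pci"} or {"web-tier"}; hypothetical classifications
    host: str         # current hypervisor; deliberately ignored by policy


# Zones derive from data-classification tags, not from network location.
ZONE_BY_TAG = {"pci": "restricted", "web-tier": "dmz"}


def zone_for(vm: VM) -> str:
    for tag in ("pci", "web-tier"):       # most restrictive tag wins
        if tag in vm.tags:
            return ZONE_BY_TAG[tag]
    return "quarantine"                    # untagged systems get fenced off


def allowed(src: VM, dst: VM) -> bool:
    # Toy rule matrix: the DMZ may initiate into restricted, never the reverse.
    matrix = {("dmz", "restricted"), ("dmz", "dmz"), ("restricted", "restricted")}
    return (zone_for(src), zone_for(dst)) in matrix
```

Real application-aware tools implement this idea with far richer context (process, user, service identity), but the structural property is the same: move the VM and the zone membership, and therefore the access decision, travels with it.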

However, even in the most virtualization-aggressive IT organizations, hybrid architectures will be the reality due to budget constraints and the struggle to virtualize legacy systems. Physical security and visibility tools will likely persist during this evolution, complicating the work of security teams struggling to gain cohesive insight into events. Centrally managed visibility layers, which can interoperate with inspection tools in both the virtual and physical environments, are critical in providing the event data necessary for a unified view of the entire infrastructure.

Luckily, many of the latest tools are capable of using a hypervisor network API in order to inspect virtual machine traffic. Some products, such as those by Ixia (Net Optics) and Gigamon, can even tunnel monitored traffic back to a centralized device in order to merge inspection of the physical and virtual environments. Where that isn’t supported, a SPAN session on the distributed switch and/or agents installed on guests and hosts can also provide inspection and enforcement.
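The “tunnel it back to a central device” pattern typically wraps each mirrored frame in a lightweight encapsulation, GRE being a common choice, so the copy can cross the routed underlay to a collector. A minimal sketch of gretap-style framing (the vendor products above use their own, richer mechanisms; this only illustrates the shape of the technique):

```python
import struct

GRE_PROTO_TEB = 0x6558  # Transparent Ethernet Bridging: a full L2 frame follows


def mirror_wrap(frame: bytes) -> bytes:
    """Wrap a mirrored Ethernet frame in a minimal 4-byte GRE header
    (no checksum, key, or sequence fields), as a gretap tunnel would,
    for delivery to a central monitoring collector."""
    return struct.pack("!HH", 0x0000, GRE_PROTO_TEB) + frame


def mirror_unwrap(payload: bytes) -> bytes:
    """Recover the original mirrored frame at the collector side."""
    _flags_ver, proto = struct.unpack("!HH", payload[:4])
    if proto != GRE_PROTO_TEB:
        raise ValueError("not a bridged Ethernet frame")
    return payload[4:]
```

The design point worth noting: because the mirrored copy is itself encapsulated, the central inspection device must terminate the tunnel before analysis, which is exactly the decapsulation capability to verify when evaluating visibility tooling.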

Ultimately, organizations should take a phased approach to building a visibility architecture that aligns with their virtualization road map. The design should be based on a prototype of validated visibility tools and security controls. This strategy will inform a data-driven methodology for migrating to newer tools that better support virtualization.

Additionally, a risk assessment of the current virtualized environment will capture any regulatory and/or compliance requirements, and help drive the timeline for deploying the proper controls. The transition should be planned carefully; otherwise the result could be an infrastructure littered with an assortment of devices and one-off security tools, confusing to manage for already resource-constrained security teams.