Market Analysis: Network Node Validation

Vendors are rolling out smarter network access-control models with node validation technology. We provide an overview as well as a few operational concerns to consider.

November 4, 2005


We set out to analyze the next generation of network access-control models. Many include the concept of network node validation, or NNV--that is, identifying and verifying nodes attempting to join the environment. We wanted to know which strategies help with which problems, and what organizations should expect from this intelligent infrastructure. In this article we examine competing vendor strategies and discuss operational concerns organizations should watch for. In "Catching Rogue Nodes," we take an early look at next-generation technology from Cisco Systems, Juniper Networks and start-up ConSentry Networks. We also examine how multivendor implementations from companies like Sygate (now Symantec) approach the problem.

We found that not all initiatives are created equal--some components are baked, while others must go back in the oven. In addition, these technologies will bring non-technical changes as well as technical ones.

Unraveling Node Validation

Most organizations associate NAC (Network Admission Control) with Cisco's network-defense initiative. However, as the need for better network control grows, other vendors are joining the fray. Legacy infrastructure giants Enterasys Networks, Extreme Networks and Foundry Networks are all incorporating greater security functionality into their product lines; some of those features resemble Cisco's NAC in functionality, others in name only.

Juniper has produced some innovative alternatives to Cisco's approach, for example, using a combination of new devices and repurposed SSL VPN technology. And newcomers ConSentry, Lockdown Networks, Nevis Networks and Vernier Networks are all poised to introduce NAC-like standalone products that provide pieces of the puzzle, in competing ways.

What, exactly, is node validation? It depends on whom you ask. The approaches taken by Cisco and Juniper bring together two concepts that have rarely met before: identity and system posture. By gathering information on patch levels, software versions, running processes, and specific OS- and application-level information, for instance, we can make assumptions about a system's relative health (see "Client-Side Security: Still Shaky" for some caveats about trusting agent models). Combine this information with a method of authentication (identity), and we have tangible information on which to base network-admission decisions.
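To make the posture half of that equation concrete, here's a minimal sketch of the kind of report a posture agent might assemble. The field names and values are our own illustration, not any vendor's schema; in a real agent, the antivirus, firewall and patch data would come from the products themselves.

```python
# A minimal sketch of a posture report an agent might assemble.
# Field names are illustrative only -- not any vendor's schema.
import platform

def collect_posture() -> dict:
    """Gather the host facts an admission decision could weigh."""
    return {
        "os": platform.system(),           # e.g. "Windows"
        "os_version": platform.version(),  # rough proxy for patch level
        "av_definitions": "2005-11-01",    # would come from the AV product
        "firewall_running": True,          # would come from a process check
        "browser_patched": False,          # e.g. IE fixed for a recent hole
    }
```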


Other vendors' strategies are predicated on incorporating basic authentication and integrating technology that looks suspiciously like signature- and traffic-anomaly detection. We're not convinced these approaches are even evolutionary, much less revolutionary. In general, though, legacy traffic-inspection techniques are best left to security vendors.

Authentication is nothing new either; take the 802.1x specification (see "802.1x for Dummies"). However, it's important to note that identifying who a user is differs wholly from identifying the state of his or her system. Posture gives us some visibility into system health, or state. Examples of posture include whether the machine's version of IE is patched for a recent vulnerability, information on antivirus and firewall processes and versions, and other metrics with which to judge the health of a PC.

So we have two pieces of the puzzle: identity and posture. Assuming that some network device ultimately permits or denies network access for a system, access control must also be part of our equation. The final piece that NNV provides is authorization--letting the network factor in multiple data points and consult a set of policies defined by the organization to determine the suitability of granting network access. It is this "cross model" functionality that led us to acknowledge NNV--or at least the concept of NNV--as something more than simple access control. Cisco calls it NAC, for Network Admission Control; Juniper deems it Secured and Assured Networking; and Symantec calls it NAC but means "network-access control." Whatever the name, these initiatives are more than just another access-control trick. Vendors are factoring critical new components into the network provisioning and permission equation.

The need for this technology is apparent. Take the example of a mobile worker returning to the office, his laptop infected with the latest round of contaminant fun. The employee plugs his laptop into the corporate network. The worm, happy to be on a 100-Mbps connection, proceeds to eat the organization from the inside out.

Now, imagine if, before that laptop gained full access to the network, it was identified as being in a questionable state of health and in need of an antivirus update, a cleanup tool or attention by the helpdesk. Follow-up action might go beyond just keeping that node off the main production network. You might choose to place it on a restricted VLAN or have the infrastructure generate a call for human intervention. Technology that is flexible in its authorization process, such as Cisco's NAC or Juniper's Infranet products, allows for this elasticity.
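As a hypothetical illustration of that elasticity, the sketch below shows one way an authorization policy could map identity and posture onto an outcome. The VLAN names and health checks are invented for this example; they're not defaults from Cisco, Juniper or anyone else.

```python
# Illustrative only: one way an authorization component might combine
# identity and posture. VLAN names and checks are invented for this sketch.

def authorize(user_role: str, posture: dict) -> str:
    """Return an enforcement decision for a node seeking admission."""
    healthy = (posture.get("firewall_running", False)
               and posture.get("browser_patched", False))
    if not healthy:
        return "quarantine-vlan"    # restricted VLAN; flag for the helpdesk
    if user_role == "contractor":
        return "contractor-vlan"    # no route to sensitive data centers
    return "production-vlan"

# The returning laptop from the scenario above: an authenticated
# employee, but with questionable posture.
print(authorize("employee",
                {"firewall_running": True, "browser_patched": False}))
# -> quarantine-vlan
```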

Another goal might be to restrict contractor or supplier access. Many organizations must let nonemployees work in their offices. This scenario has all the components of the dirty laptop, with the added challenge of a contractor attaching a PC that local admins don't fully control. Should Craig the Contractor and his PC have network-level access to sensitive systems? Is there any reason for his PC--clean or not--to communicate with database servers that contain customer information? If Craig is there only to support printers in Budapest, is there any reason his PC should be communicating with the data center in Stuttgart?

NNV might start off as nothing more than a mapping exercise without enforcement, or a switch-level VLAN assignment. But some products, including Juniper's Infranet controller and enforcer, are geared for creating dynamic, firewall-like enforcement rules for greater access controls throughout a network--not just at its perimeter.

Sound good? Well, there are some gotchas. First, a lot of components must play nicely together. Switches, routers, NAC devices, agents, firewalls and authorization servers all have roles in this enforcement, and to be successful they all must communicate. In addition, some products deliver more functionality out of the box than others. We found that Cisco's NAC, while significant, is more about laying the groundwork for third-party integration and less about providing immediately usable features.

Second, many vendor approaches require some network-design shifts. Whether it's replacing a switch or inserting a high-powered firewall, there's often some change--and cash--required. Third, different implementations require different components. Much of Juniper's strategy, for example, revolves around implementing network choke points, while Cisco's approach is switch- and router-centric. Finally, some of the impact isn't technical at all, but operational.

To fully understand the impact of various approaches, we must first understand the basic components.

So how does all of this fancy new technology work? Much depends on the vendor, and we'll go into more technical detail in "Catching Rogue Nodes." But at a high level, some common pieces exist:

» Posture-verification component. Successful posture ID requires some analysis. That analysis can be performed locally using an agent or remotely using a scanning mechanism. Both approaches have advantages and disadvantages. Agents can gather more information than remote scanners, for example, but agents require another piece of software on participating platforms. Vulnerability scanners, such as QualysGuard from Qualys (a member of Cisco's NAC initiative), have an advantage in that they can scan anything with an IP address that shows up on the network. However, they cannot query more host-centric components to find out, say, which version of a browser the system is running or which antivirus definition file is in place. Scanners also generate a great deal of network traffic.

Regardless of approach, once information is gathered, it's passed on to the authorization component. We think the agent model will become more prevalent, but being able to implement both approaches is ideal.

» Authentication component. The ability to identify which user is behind which node on the network is critical to access-control decisions. Vendors are taking several different paths to authenticate the user behind the node, ranging from 802.1x "supplicants" running on workstations that prompt users for login credentials, to HTTP-based redirection methods or "captive portal" technology, which many of us have used in our hotel rooms. The ideal approach is to be able to do either--802.1x where it makes sense and is feasible, and platform-agnostic HTTP redirection where greater flexibility is required. Like the posture-verification component, the authentication component communicates with the authorization component.

» Authorization component. Once identity and posture have been ascertained, some component or ruleset must determine what permissions should be granted. The authorization component is responsible for communicating with the posture and authentication components, and for relaying information to other devices within the infrastructure--switches, for example. This is a critical piece of the technology puzzle--not only is it the major decision-making locus, it's also where directory services and authentication stores will integrate. The ability of large organizations to adopt NNV technology hinges on their using existing identity stores.

» Enforcement component. Once a decision has been made about what to do with a particular node or session, there must be a mechanism to enforce that decision. This mechanism may take one of several forms, but it is typically a process that involves communication from the authorization component and then some type of action taken by a network device--switch, router, firewall--to permit or deny traffic flows. The sketch below shows, in miniature, how these four pieces might hand off to one another.
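Here's that hand-off as a toy end-to-end pass, reusing the collect_posture() and authorize() illustrations above. The function names and flow are our own simplification, not any vendor's API; real products exchange this data over 802.1x, RADIUS and proprietary protocols.

```python
# A toy end-to-end pass through the four components, assuming the
# collect_posture() and authorize() sketches above are in scope.

def enforce(switch_port: str, decision: str) -> None:
    """Enforcement component: a switch, router or firewall acts on an
    instruction such as 'put node in VLAN x' or 'apply ACL y'."""
    print(f"port {switch_port}: assign {decision}")

def admit(switch_port: str, user_role: str) -> None:
    posture = collect_posture()               # 1. posture verification
    # 2. authentication would establish user_role (802.1x or captive
    #    portal); here it's passed in directly for simplicity.
    decision = authorize(user_role, posture)  # 3. authorization
    enforce(switch_port, decision)            # 4. enforcement

admit("Fa0/12", "contractor")
```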

These four components are present, in some form, in most of the vendor offerings we've examined. The similarity ends, though, when it comes to how they are implemented. In Juniper's model an Infranet Controller appliance serves as the authorization and data-gathering point, taking input from the agents and interacting with enforcement points. Enforcement is done through an Infranet Enforcement component that resides in the feature sets of select NetScreen firewalls. By comparison, Cisco's approach involves host agents that talk to the Cisco ACS server (Cisco's authorization component). The server then optionally communicates with third-party management stations for further input, and instructs compatible Cisco routers and switches to pass down enforcement instructions--for example, "Put node in x VLAN" or "Implement y access control list."

There's no right or wrong way to accomplish these tasks. But it is important to understand that moving to an NNV model might change some critical dynamics in your environment, some technical, some not so technical.

Some byproducts of a smarter infrastructure have little to do with technology and much to do with areas of responsibility, governance and operational shifts.

For starters, assuming the identity + posture = better access control equation is realized, organizations face a new challenge: troubleshooting and addressing a new breed of network access. Gone will be the days of watching Ethernet link lights and DHCP address delegation to determine whether a PC's connection is working. In the world of admission control, you've got identity challenges (failed authentication), posture challenges (agent communications, patch levels, host state) and provisioning challenges (What VLAN was Craig Contractor dumped onto? Is it working? Did the switch handle the assignment?).

Think password resetting is a problem? Imagine if your switch, router or firewall administration teams had to be consulted every time a user couldn't access the network! The heat will quickly move from the helpdesk to the networking group if organizations aren't prepared for the technology and procedures necessary in an admission-controlled world. Part of this challenge must be met by vendors. Specifically, they must supply the tools to provide information to first-level support personnel. But troubleshooting and administration procedures must be built out before you go live. Based on our tests, offerings need a lot of work in this department.

Another challenge involves ownership of components. Typically, desktop platform groups support the software running on desktop nodes. Network teams own switching and routing platforms, unless they're subcontracted to special LAN and WAN teams. Firewalls are owned by security teams or network teams, and authentication systems are owned by all sorts of groups. When you bring NNV and enforcement into the equation, your technology not only crosses all these domains, but it also must do so flawlessly. If your desktop agent isn't properly communicating with your authorization services, or your authorization services aren't talking to your enforcement devices, you're going to have problems.

Redundancy and availability also take on new importance. Although it may seem tactical, authentication and authorization systems and alerting frameworks must be highly redundant and heavily integrated into Level 1 and 2 helpdesk support. Every enforcement point in an NNV-enabled environment will require a communication path to an authorization point. Failure to address this requirement will break most of these models.

The Final Packets

What's our take on this brave new NNV world? For adoption to be successful, a few requirements must be met: First, there must be a real desire to move to a tiered security model. Without this, the change and costs required are too great. The complexity is high, the dependencies aren't trivial, and the interaction required among support teams is significant. Upfront costs may not seem prohibitive at first, but they aren't minor.

Second, organizations must become more conscious of what goes where and who talks to whom. Although it's a stretch now, the roles from ID-management projects can carry over to node validation and control, too. Finally, cross-platform support is crucial. Despite what Microsoft might be telling you, enterprises are more diverse now than ever. Mac OS X, Linux and embedded devices are only going to grow in popularity, and there must be agents or workarounds to support these "alternative" platforms. Right now, cross-platform support from most vendors ranges from minimal to pathetic.

In the short term, weigh the risks associated with rogue or infected systems entering your environment against the costs and operational changes of NNV technology. Longer term, we see the adoption of node validation technology as inevitable, and there's a strong case to be made that we, as an IT community, should have had something comparable in place years ago.

Finally, there's a story here that has little to do with technology and everything to do with sales strategy. Although there are clearly alternatives to upgrading your routing and switching infrastructures to support NNV initiatives, IT upgrades are like death and taxes--they're going to happen. When it comes time to buy that new switch, are you going to buy the least expensive one, or might the model with extra security features be a wiser decision? Therein lies the crux of the matter. At the root of this technology strategy is a new role for security in products, a critical differentiator that might tip the decision scales.

The evolution of NNV (network node validation) technology is certainly a positive and necessary step forward, but remember that the agent-based approach is predicated on a critical assumption: responses from validation agents can be trusted by the decision-making technology "upstream." That assumption is problematic given the state of most Windows deployments. Many users still have local admin privileges, for example, which could let hostile code be deployed. This, in turn, could create problems on multiple fronts, including smart malware impersonating agent responses or disabling key security components.

Savvy readers will note that we've already seen malware that is security-agent "aware" and reacts accordingly. Hostile baddies like Bagle, Sober and Agobot targeted and disabled antivirus and firewall software on infected computers, according to virus experts at Helsinki-based F-Secure. Naco.B even went so far as to delete protective software from the system, and Klez demonstrated a dominant personality streak by terminating other worms that tried to infect the same system! It's not too far a stretch, then, to think malware writers might reverse-engineer some of these agents in the coming months and release a new wave of "agent-aware" intruders that give an "all clear" or "completely healthy" thumbs up to upstream node-validation components.

Unfortunately, most organizations won't be able to immediately execute the only real solutions to this threat: First, users and apps must run in restricted contexts that can't tamper with parts of the OS; this ranges from difficult to wholly impractical, depending on the scenario. Second, tighter hardware, software and OS integration must exist when it comes to code signing, code execution and trusted computing platforms. Until we have the technology to better shield OSs and applications from tampering, we're going to be fighting an uphill battle when it comes to "trusted clients." Buyer beware.

802.1X By the Numbers

Long term, the 802.1X authentication protocol will save enterprises money and add to security. But early adopters have run into unexpected costs and pitfalls, according to a recent Forrester survey.

15 = Percentage of enterprises surveyed with 802.1X-enabled switches

$25 to $50 per user = Cost of a robust 802.1X supplicant, like Funk Software's Odyssey client, for non-XP clients

3 years = Switches older than three years may need to be upgraded to handle 802.1X

The IEEE 802.1x standard has received much attention over the years thanks to its application in the wireless world. But the standard also has relevance for wired networks, though it hasn't caught on ... yet. The 802.1x standard defines a method for port-based access control "to provide a means of authenticating and authorizing devices attached to a LAN port that has point-to-point connection characteristics." Essentially, it's a Layer 2 authentication mechanism.

802.1x will become more relevant because the standard lies at the heart of many NNV (network node validation) efforts--including Phase II of Cisco's NAC initiative--and because most switches now ship with the protocol baked in. The 802.1x model comprises a "network access port" (a switch port), an "authenticator" (a switch), an "authentication server" (such as a RADIUS front-ended auth server that communicates to the switch) and a "supplicant" (a workstation node attached to a switch port). The general idea is that, in an 802.1x-enabled switching environment, nodes must authenticate to the switch before they can communicate with other parts of the network.
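The sketch below walks through those roles in miniature. It's a drastic simplification with a made-up user store: real deployments speak EAPOL between supplicant and authenticator and RADIUS between authenticator and authentication server.

```python
# A drastically simplified model of the 802.1x roles described above.
# Real deployments use EAPOL (supplicant <-> authenticator) and RADIUS
# (authenticator <-> authentication server); credentials here are made up.

USER_STORE = {"craig": "s3cret"}  # stand-in for the RADIUS-fronted backend

def authentication_server(username: str, password: str) -> bool:
    """Backend auth server: returns Accept (True) or Reject (False)."""
    return USER_STORE.get(username) == password

def authenticator(port: str, username: str, password: str) -> None:
    """The switch: the port forwards traffic only after an Accept."""
    if authentication_server(username, password):
        print(f"port {port}: authorized, forwarding traffic")
    else:
        print(f"port {port}: unauthorized, only 802.1x frames allowed")

# The supplicant on the attached workstation answers the credential prompt.
authenticator("Fa0/3", "craig", "s3cret")
```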

There are all sorts of practical applications, but one of the more common is addressing "insecure" Ethernet jacks--which, in many offices, is just about every jack. In an 802.1x-enabled environment a random passerby wouldn't be able to simply plug his laptop in and be on the network; he'd have to authenticate to the switch on the other side of that jack before gaining Layer 2 access to anything. Of course, nodes attaching to an 802.1x-enabled port would have to be equipped with 802.1x supplicants for this model to work, and in many organizations that's a challenge.

It's also important to note that not all supplicant software is created equal. For example, the 802.1x supplicant included in Windows XP works for wireless interfaces but lacks options for using group policy objects for 802.1x on wired interfaces. The supplicant found within the Cisco Trust Agent (CTA) 2.0, by comparison, was designed to support 802.1x on wired interfaces only. Some OSs, including Mac OS X, support 802.1x in a limited manner, while others, like Windows 2000, don't support it natively at all. Fortunately, third-party supplicants from Funk Software, Meetinghouse and others are available for a wide range of platforms and typically have comprehensive feature coverage.

802.1x has some design challenges worth considering. We couldn't find any implementations or even plans to incorporate any type of ticketing system, a la Kerberos, for example. Why would 802.1x need something similar to a session token? For one, organizations that are looking to adopt one-time passwords using hardware tokens with 802.1x will find themselves in a bit of a bind--depending on configuration parameters, a supplicant reauthentication process requires the passing of valid credentials, and one-time passwords are, well, one-time. Imagine making users type their token-generated passwords every few hours. Not pretty.

Challenges aside, 802.1x is moving beyond just port-level authentication; it's now (cleverly, we think) being used for relaying node-posture information to devices farther upstream by embedding that information in 802.1x frames. We suspect that 802.1x's role in our environments will only grow in importance in the coming years.
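As a rough illustration of what embedding posture in frames can look like, the sketch below packs posture attributes in a type-length-value (TLV) layout, a common way to carry such data inside protocol messages. The type codes and fields are invented for this example; actual implementations define their own formats.

```python
# Illustrative only: packing posture attributes as type-length-value
# (TLV) records. The type codes are invented for this sketch.
import struct

POSTURE_TYPES = {"av_definitions": 1, "firewall_state": 2}

def pack_tlv(name: str, value: bytes) -> bytes:
    """Encode one attribute: type (1 byte), length (2 bytes), value."""
    return struct.pack("!BH", POSTURE_TYPES[name], len(value)) + value

payload = (pack_tlv("av_definitions", b"2005-11-01")
           + pack_tlv("firewall_state", b"on"))
print(payload.hex())
```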

When we let it be known we'd be analyzing network policy control, we were deluged. We had to assign a team of specially trained editors just to hold back the hordes of vendor reps that wanted face time. This is, of course, indicative of the furor Cisco started with its NAC offering. But no über product initiative happens in a vacuum: IT is painfully aware that it must control access to corporate networks to have any hope of ending the patch-and-pray nightmare we're in today.

Thankfully, providers of what we're calling NNV (network node validation) technologies seem to be on the right track. In "But Will It Work?" we provide an overview of the major vendor strategies and lay out some use cases and operational concerns that you must consider when deciding how to implement NNV.

Note we said how, not if. Although it could be some years before NNV trickles down to smaller shops, we strongly recommend IT pros get educated on the competing approaches. In "Catching Rogue Nodes" we examine three NNV systems: Cisco's NAC, Juniper Networks' Unified Access Control and ConSentry Networks' LANShield. These offerings vary wildly in scope, complexity and functionality, but one thing is certain: Making a choice will require soul searching to decide what problems your organization needs to fix, and how much you can spend to do so.
