
Is Zero Trust Really as Popular With Corporate Customers as Security Experts Say?


Introduced in 2010 by former Forrester analyst John Kindervag, the Zero Trust security model is considered a core industry concept. Its main advantage is that it removes the need to build a detailed threat model: if an attack can come from anywhere, you cannot trust anyone, and whatever you do trust you must constantly re-check and verify.

How it started

Although this security concept was formulated more than ten years ago and was well received by security experts, the actual implementation of Zero Trust did not begin immediately. Business leaders are extremely reluctant to spend money on new tools unless those tools deliver a measurable (in hard currency) effect.

Even government agencies were in no hurry to allocate budgets until the Edward Snowden case in 2013. Snowden needed some documents to do his job, but he should not have had access to most of the documents he leaked. After such a significant incident, the deliberate development of Zero Trust began, and specific recommendations and industry best practices appeared.

Today, no one argues with the need to build corporate security systems in line with the requirements of the Zero Trust concept.

Partial protection

When people talk about Zero Trust, they usually mean corporate networks. This is largely because network equipment vendors actively promote Zero Trust in that direction. Securing the network is undoubtedly necessary, but this one-sided approach narrows security professionals' view of the concept.

It appears that they forget about other vectors and focus only on network security, most often in the form of firewalls, encryption, VPNs, and the like. Today, it is extremely hard to find a medium-sized organization that does not have these solutions. Less common are firewall "extensions" or standalone solutions for intrusion detection and prevention (IDS/IPS) and application-level traffic filtering (L7 filtering). Even less common are Distributed Denial of Service (DDoS) protection and Web Application Firewalls (WAF).

We should also mention the widespread antivirus solutions. Whether it is a free product like Avast or the built-in Microsoft Defender, you can find antivirus on virtually every Windows computer, even in small organizations.

Some organizations also use security scanners or vulnerability management systems. Sometimes we see Data Loss Prevention (DLP) tools in use. At the same time, the implementation of the DLP system is most often never finished: it scans files and messages crossing the perimeter by keywords alone and generates many false positives. In addition, almost no one actually blocks a leak in progress; managers drop the technical work needed for that capability at the planning stage in an effort to cut costs.

There are also installations of Security Information and Event Management (SIEM) tools. These systems are often little more than log collectors feeding a dusty archive. They usually do not actually alert on incidents, although some vendors' approaches can bring real security benefits when correlating data from several sources. SIEM systems are often configured during the pilot stage and never updated or improved by the customer afterward.

Unfortunately, in practice, security is often limited to the approaches and systems listed above. For example, centralized encryption of data or of disks/containers on servers is uncommon. Even when encryption is present, there is no understanding of who has access to which data and whether that access is provided correctly.

Of course, using the technologies listed above, you can partially secure your perimeter and sometimes even trace an attack using the collected logs. But a good protection system cannot be built without a deep understanding of the primary assets, the information systems living within the network, and the data stored. How do you defend yourself against external and internal intruders with only perimeter protection, antivirus, a system that flags some of the data leaks, and a log archive of sorts? It is a difficult question to answer.

Is the situation changing?

Vendors started talking more about Zero Trust, and customers, of course, began asking questions. Since network solution providers historically hold the strongest positions, their voices are louder than others'. Some complex Zero Trust projects are indeed being implemented, but many of them still concern access to the network and, possibly, to individual services, often only those published outside the traditional perimeter.

As for the protection of specific assets, applications, and services, mature models for applying Zero Trust tooling are claimed mainly by the largest organizations, and even they cannot always point to a positive experience.

There is hope

Despite the sad state of practical implementation outside the network segment, interest in the Zero Trust model is constantly growing. Multi-Factor Authentication (MFA) and micro-segmentation at the network access level are especially popular. Such solutions do not address the data itself but rather restrict access to it; still, moving toward modern technologies for restricting access to data and critical services is a good sign.

Today's most pressing challenge is to communicate to customers that Zero Trust is about much more than just network access and enhanced validation of access legitimacy. And the necessary steps in the right direction are already being taken. Businesses are beginning to understand that MFA and other solutions are just separate layers that complicate access for attackers by conducting checks that increase or decrease the level of trust in the actor.

The specific-to-general pattern

When we start granting users granular access to files on a specific server, everything is usually configured at the network-folder and file-system levels (most often NTFS).

Groups, policies, and access control lists work in the small network of a small enterprise, as long as they are free of human error and the number of unique rights can be counted on your fingers. In the IT infrastructure of larger companies, however, the situation is far more complicated: it is no longer possible to control the process manually even on a single server, let alone on larger storage systems. Automation of access-rights management is required, as well as behavioral analysis to find formally legitimate troublemakers.
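
As a rough illustration of what such automation can look like, here is a minimal Python sketch that reviews exported ACL entries and flags write-level rights granted to catch-all groups. The record fields and the list of "broad" principals are assumptions for the example, not a real export format or API.

```python
# A minimal sketch of automating an access-rights review. It assumes ACL data has
# already been exported (e.g., from NTFS shares) into simple records; the field
# names and the "broad principals" list below are illustrative assumptions.
from dataclasses import dataclass

BROAD_PRINCIPALS = {"Everyone", "Authenticated Users", "Domain Users"}  # assumed "too broad" groups
WRITE_RIGHTS = {"Write", "Modify", "FullControl"}

@dataclass
class AclEntry:
    path: str        # folder or share the entry applies to
    principal: str   # user or group the right is granted to
    right: str       # e.g., "Read", "Write", "Modify", "FullControl"

def find_overly_broad_grants(entries: list[AclEntry]) -> list[AclEntry]:
    """Flag write-level rights granted to broad, catch-all groups."""
    return [e for e in entries
            if e.principal in BROAD_PRINCIPALS and e.right in WRITE_RIGHTS]

if __name__ == "__main__":
    exported_acls = [
        AclEntry(r"\\fileserver\finance\reports", "Domain Users", "Modify"),
        AclEntry(r"\\fileserver\finance\reports", "Finance-RW", "Modify"),
        AclEntry(r"\\fileserver\public", "Everyone", "Read"),
    ]
    for entry in find_overly_broad_grants(exported_acls):
        print(f"Review: {entry.principal} has {entry.right} on {entry.path}")
```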

One of the pillars of Zero Trust is data access control, which includes the even older principle of least privilege. The same principle should be applied in the role model when granting access rights to specific applications and their internal modules. Each individual account should be given access based on its business role and, possibly, additional dynamic conditions, and only to the data and functionality needed to solve business problems "here and now."
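
A minimal sketch of such a least-privilege check might look like the following; the roles, permission names, and the business-hours condition are hypothetical and stand in for whatever dynamic conditions an organization actually uses.

```python
# A minimal sketch of a least-privilege check: an account gets access only to the
# data and functions mapped to its business role, and only while an extra dynamic
# condition (business hours, in this illustration) holds. All names are invented.
from datetime import datetime

ROLE_PERMISSIONS = {
    "accounts-payable": {"invoices:read", "invoices:approve"},
    "auditor": {"invoices:read", "ledger:read"},
}

def is_business_hours(now: datetime) -> bool:
    return now.weekday() < 5 and 8 <= now.hour < 19

def authorize(role: str, permission: str, now: datetime | None = None) -> bool:
    """Grant only what the role needs 'here and now'; deny everything else."""
    now = now or datetime.now()
    allowed = ROLE_PERMISSIONS.get(role, set())  # unknown role -> empty set -> deny
    return permission in allowed and is_business_hours(now)

print(authorize("auditor", "invoices:approve"))  # False: not part of the auditor role
```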

To do this, you will have to use Data-Centric Audit and Protection (DCAP) systems, as well as ticket systems capable of automated provisioning of access requests. You can start with simpler ticket systems and integrate them with more complex ones, such as Identity Management / Identity and Access Management (IdM/IAM) systems. At this stage, it is also recommended to provide access in a standardized way, based on specific business roles, as sketched below.
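
To illustrate the "standardized access by business role" idea, here is a small sketch of how a ticket-driven request could be routed: entitlements within the requester's role baseline are auto-approved, and everything else goes to manual review. The role baseline and entitlement names are invented and are not tied to any particular IdM/IAM product.

```python
# A minimal sketch of routing an access request raised in a ticket system:
# auto-approve only entitlements that belong to the requester's role baseline;
# escalate anything out of role to a human reviewer. Names are illustrative.
ROLE_BASELINE = {
    "sales-manager": {"crm:read", "crm:write", "reports:read"},
    "support-agent": {"crm:read", "ticketing:write"},
}

def route_access_request(requester_role: str, entitlement: str) -> str:
    baseline = ROLE_BASELINE.get(requester_role, set())
    if entitlement in baseline:
        return "auto-approve"   # standard for the role: provision automatically
    return "manual-review"      # out-of-role request: escalate to the data owner

print(route_access_request("support-agent", "crm:read"))      # auto-approve
print(route_access_request("support-agent", "reports:read"))  # manual-review
```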

To prevent a compromised computer or user from retaining privileged access, it is worth implementing, in addition to a system for responding to suspicious and possibly malicious activity, multi-factor authentication and advanced Network Access Control (NAC) and/or an external access broker, a Cloud Access Security Broker (CASB), which performs roughly the same functions but for cloud systems and applications.

The last two systems mentioned analyze devices attempting to gain access and calculate a risk level based on their hardware and software configuration, as well as on the user account involved. Based on the identified risk indicators, security policies are applied, and access is either granted or denied.
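
The following sketch illustrates the kind of posture-based scoring such systems perform; the indicators, weights, and thresholds are invented for the example and do not reflect any specific NAC or CASB product.

```python
# A minimal sketch of posture-based risk scoring: each risk indicator observed on
# the device or account adds to a score, and the resulting level drives the access
# decision. Indicators, weights, and thresholds are assumptions for illustration.
RISK_WEIGHTS = {
    "os_outdated": 30,
    "disk_not_encrypted": 25,
    "av_disabled": 25,
    "unknown_device": 40,
    "mfa_not_enrolled": 35,
}

def risk_score(indicators: set[str]) -> int:
    return sum(RISK_WEIGHTS.get(i, 0) for i in indicators)

def access_decision(indicators: set[str]) -> str:
    score = risk_score(indicators)
    if score >= 60:
        return "deny"      # too risky: block the connection
    if score >= 30:
        return "limited"   # allow, but only to low-sensitivity resources
    return "allow"

print(access_decision({"os_outdated"}))                    # limited
print(access_decision({"unknown_device", "av_disabled"}))  # deny
```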

Even if access is granted, it does not have to stay open indefinitely. During a single connection session, protection systems should periodically check whether the security state of the connected device has changed and should also accept signals indicating that the entire verification cycle needs to be re-run.
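
A simple sketch of this continuous-verification loop is shown below; the posture-check callback, the interval, and the revoke action are assumptions for illustration rather than a real product interface.

```python
# A minimal sketch of continuous verification during a session: access granted at
# connect time is re-checked periodically, and the session is cut as soon as the
# device's security posture degrades. Callbacks and interval are assumed stubs.
import time
from typing import Callable

def monitor_session(posture_check: Callable[[], bool],
                    revoke: Callable[[], None],
                    interval_s: int = 300,
                    max_checks: int = 3) -> None:
    """Periodically re-evaluate the device; revoke access on the first failure."""
    for _ in range(max_checks):
        time.sleep(interval_s)
        if not posture_check():  # posture changed (e.g., disk encryption turned off)
            revoke()
            return

# Example wiring with stub callbacks: the failing posture check triggers revocation.
monitor_session(posture_check=lambda: False,
                revoke=lambda: print("session revoked"),
                interval_s=0, max_checks=1)
```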

Conclusion

Of course, the suggested scenario is not yet a full-fledged implementation of the Zero Trust concept, but it is already a significant step in the right direction. It should always be remembered that today's attackers have plenty of free time and motivation to penetrate the network. Realizing this, you need to build internal barriers and thoroughly monitor everything that happens in order to spot the actions of intruders. A Zero Trust strategy can provide a decent level of protection that is effective against data breaches and modern cyber threats.

Alex Vakulov is a cybersecurity researcher with over 20 years of experience in malware analysis.