Don't believe us? Consider how far and wide a comprehensive data loss product extends its reach through your organization. User endpoints must run (and you must support) a new agent. E-mail and network communications will be scanned for content, and data repositories, such as file shares, databases, and PCs, will be interrogated for violations. All this technology requires policies tailored to your environment and level of risk tolerance.
And no matter how much you automate them, DLP systems will set off alarms. The end result is more work for your administrators.
How much more work? The answer depends on the size of your organization and the level of deployment, but we've identified four areas where DLP technology will make demands on your resources.
Before you fire off your first scan to see just how much sensitive data is floating around the network, you'll need to create the policies that define appropriate use of corporate information. Some policies are fairly obvious--there's no good business reason for an employee to upload a spreadsheet full of Social Security numbers to his Facebook profile. But other policies will have to go through some contortions to accommodate workflows. For example, you may generally forbid employees from e-mailing customer information outside the organization--except for five people in one business unit who have to send records to seven employees at a business partner, but only on the last Friday of every month and never without sign-off from at least three lawyers from in-house counsel.
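To make that contortion concrete, here's a rough sketch (in Python) of what such an exception-riddled rule might look like once encoded. The sender and recipient lists, addresses, and function names are hypothetical, invented purely for illustration; a real DLP product would express this in its own policy language or console.

```python
from datetime import date
import calendar

def is_last_friday(d: date) -> bool:
    """True if d is the last Friday of its month."""
    last_day = calendar.monthrange(d.year, d.month)[1]
    # Walk back from month end to the most recent Friday.
    for day in range(last_day, last_day - 7, -1):
        candidate = date(d.year, d.month, day)
        if candidate.weekday() == 4:  # Monday is 0, so Friday is 4
            return candidate == d
    return False

# Hypothetical exception: customer records may leave the organization only
# from approved senders to approved partner recipients, on the last Friday
# of the month, with at least three sign-offs from in-house counsel.
APPROVED_SENDERS = {"alice@example.com", "bob@example.com"}
APPROVED_RECIPIENTS = {"records@partner.example"}

def may_send_customer_records(sender, recipient, sent_on, signoffs):
    return (
        sender in APPROVED_SENDERS
        and recipient in APPROVED_RECIPIENTS
        and is_last_friday(sent_on)
        and len(signoffs) >= 3
    )
```

Even this toy version shows why policy work isn't point-and-click: every business exception becomes another condition someone has to write, test, and maintain.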
Out of the box, most DLP systems come with pre-built policies, especially for regulations such as PCI and HIPAA. However, you'll most likely need to build new policies or customize existing ones to meet your particular security needs, risk profile, and regulatory landscape. While many DLP vendors provide graphical, wizard-driven tools for this process, creating and tweaking policies is rarely a point-and-click exercise.
And once you have a set of policies written, it's unlikely they'll go unchanged for long. New compliance mandates may arise, new kinds of information will need to be protected, and new business practices may emerge. As a result, you'll need to continually update your policies so they keep pace with your requirements for protecting sensitive data.
Once your policies are in order, the next step is data discovery, because to properly protect your data, you must first know where it is. In midsize to large environments, you'll have at least one appliance dedicated to data discovery and content analysis. In addition, if you need to scan massive amounts of data in parallel for information that violates your appropriate use policy, you'll need to deploy additional servers for scanning. Fortunately, many top-tier systems now support scanning huge data stores with grid technology, but the fact remains that if you have many terabytes of data that you need to scan on an ongoing basis, then you're going to have to manage more servers to do it.
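The idea of fanning one discovery scan out across multiple workers can be sketched in a few lines of Python. This is a minimal illustration, not how any particular DLP product does it: the SSN-style signature, paths, and worker count are all assumptions, and real scanners handle file formats, databases, and incremental rescans far more carefully.

```python
import concurrent.futures
import pathlib
import re

# Hypothetical signature: nine digits grouped like a Social Security number.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_file(path: pathlib.Path):
    """Return (path, match count) for one file; unreadable files count as zero."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return path, 0
    return path, len(SSN_PATTERN.findall(text))

def scan_tree(root, max_workers=8):
    """Fan a discovery scan out across a pool of workers,
    keeping only files with at least one policy violation."""
    files = [p for p in pathlib.Path(root).rglob("*") if p.is_file()]
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        return {p: hits for p, hits in pool.map(scan_file, files) if hits}
```

Scale that pattern up to terabytes across dozens of scanning servers and you can see where the grid approach, and the server-management burden, comes from.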
Then there's the issue of accuracy. Consider the challenge of identifying a simple credit card number. That number could be stored in many different formats, and it could contain variations of numbers mixed with spaces, hyphens, or other characters. Because the DLP appliance can't determine context, you'll need to programmatically describe exactly what data you're looking for, and account for all of the different formats. Often you can do this graphically using easy-to-construct Boolean logic, but sometimes you'll need a scripting language like Perl to develop advanced data description policies.
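As a sketch of what "programmatically describing the data" means in practice, here is one way to hunt for card numbers in Python: a pattern that tolerates spaces and hyphens, plus a Luhn checksum to weed out random digit strings. The pattern and function names are our own illustration, not any vendor's policy syntax.

```python
import re

# Matches 13- to 16-digit runs that may be broken up by spaces or hyphens,
# e.g. "4111111111111111" or "4111-1111-1111-1111".
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: double every second digit from the right."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    """Return candidate card numbers that also pass the Luhn check."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

Note that even this small example needed two layers, a pattern and a checksum, just to keep phone numbers and part numbers from tripping the alarm; that's the accuracy problem in miniature.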
Be prepared to test the data identification capabilities you've enabled. The last thing you want is to wade through a boatload of false-positive alerts every morning because of a paranoid signature set. You also want to make sure that critical information isn't flying right past your DLP scanners because of a lax signature set. This is particularly important if you plan to use DLP technology for unstructured information, such as sensitive documents, diagrams, or source code. Also note that your signature database is going to grow and will have to be managed.
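One workable way to test a signature before unleashing it is to score it against a small labeled corpus and count both kinds of mistakes. The harness below is a generic sketch of that idea in Python; the sample signature and corpus are invented for illustration.

```python
import re

def evaluate_signature(matches, labeled_samples):
    """Score a detection function against labeled samples.

    matches: callable that takes text and returns True if it would be flagged.
    labeled_samples: list of (text, is_sensitive) pairs.
    Returns (false_positive_count, false_negative_count).
    """
    false_positives = false_negatives = 0
    for text, is_sensitive in labeled_samples:
        flagged = matches(text)
        if flagged and not is_sensitive:
            false_positives += 1   # paranoid: flagged clean traffic
        elif not flagged and is_sensitive:
            false_negatives += 1   # lax: missed sensitive data
    return false_positives, false_negatives

# Toy signature and corpus for illustration only.
ssn = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
corpus = [
    ("SSN: 123-45-6789", True),    # should be flagged
    ("Call 555-1212 today", False),  # should pass
    ("ssn is 123 45 6789", True),  # a format this signature misses
]
fp, fn = evaluate_signature(lambda t: bool(ssn.search(t)), corpus)
```

Run against real traffic samples, those two counters tell you whether to loosen or tighten the signature; here the false negative reveals that the space-separated format slipped through.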