The Threat From Within

Vigilance is the best defense against malicious insiders attempting to steal sensitive information. (Originally published in IT Architect)

August 1, 2005


Malicious insiders represent today's toughest challenge for security architects. Traditional database security tools such as encryption and access controls are rendered useless by a trusted employee who has--or can easily obtain--the right credentials. In addition, more users in the enterprise are getting database access, including DBAs, application developers, software engineers, and even marketing, HR, and customer support representatives. And whether spurred by revenge or tempted by easy money, insiders can sell their booty on a bustling information black market.

At the same time, enterprises are under increasing regulatory and market pressure to protect sensitive information. Thanks to recent laws, businesses are often compelled to report database breaches or information loss. The resulting public relations disaster can destroy customer trust, invoke government and industry fines, cause stock prices to plummet, and bring class-action litigators running. The bottom line? Enterprises that don't address the insider threat may find themselves strung up on the twin gallows of regulatory penalties and customer outrage.

The only solution to this problem is vigilance. Security and data center teams must roll up their sleeves to implement and enforce a set of best practices to address the insider threat. IT architects must then bolster policies by using the access control, auditing, and other security features built into all major database platforms.

Architects can further enhance their enforcement with an emerging group of products--sometimes called database intrusion detection systems (IDSs) or enhanced auditing solutions--that provide increased scrutiny of the behavior of authorized users.

INSIDER THREATS: THE PERFECT STORM

Insiders aren't the only threat to sensitive information. As recent breaches have shown, sensitive data can be acquired through malware planted on vulnerable machines, lost on unencrypted backup tapes, or sold outright to scam artists.

However, many threats against the database can be countered--or at least mitigated--by common security practices. For instance, in addition to limiting physical access to the data center, organizations can limit who has access to sensitive information through the use of access controls. Database applications and the underlying OS can be regularly scanned for vulnerabilities and patched when appropriate. Administrators can lock down a database the same way they would a Web server by removing unnecessary services and accounts, enforcing strong passwords, locking out accounts after a predetermined number of failed login attempts, and so on.
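Lockout rules like the one just mentioned are enforced natively by the database server, but the logic is simple enough to sketch. Below is a minimal illustration in Python; the threshold, the in-memory store, and the account names are assumptions for the example, not any vendor's implementation (real platforms use login profiles or server-level policy):

```python
# A minimal sketch of an account-lockout rule: after a set number of
# consecutive failed logins, the account is locked until reset.
class LockoutTracker:
    def __init__(self, max_failures=5):
        self.max_failures = max_failures
        self.failures = {}   # account -> consecutive failed logins
        self.locked = set()

    def record_failure(self, account):
        """Record a failed login; return False once the account is locked."""
        if account in self.locked:
            return False
        self.failures[account] = self.failures.get(account, 0) + 1
        if self.failures[account] >= self.max_failures:
            self.locked.add(account)
        return account not in self.locked

    def record_success(self, account):
        """A successful login resets the counter, unless already locked."""
        if account in self.locked:
            return False
        self.failures[account] = 0
        return True

tracker = LockoutTracker()
for _ in range(5):
    tracker.record_failure("mallory")
print("mallory" in tracker.locked)  # True: locked after five straight failures
```

The same shape of policy applies whether the counter lives in the database engine, a directory service, or an application tier.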

Even novel attacks such as SQL injection, in which an intruder feeds SQL commands into a Web application to force information from the backend database, can be countered by better software development and the new breed of Web application firewalls. Lastly, encryption can protect data in transit and at rest.
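The software-development fix for SQL injection is parameterized queries: the driver binds user input as data, so quote characters can't rewrite the statement. A short sketch using Python's built-in sqlite3 as a stand-in for any database driver (table and values are invented for illustration):

```python
import sqlite3

# Demonstrate why parameterized queries blunt SQL injection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789')")

malicious = "nobody' OR '1'='1"

# Unsafe: string concatenation lets the input rewrite the WHERE clause,
# turning it into a tautology that matches every row.
unsafe = conn.execute(
    "SELECT ssn FROM users WHERE name = '" + malicious + "'").fetchall()

# Safe: the driver binds the value; the quotes are just data, no rows match.
safe = conn.execute(
    "SELECT ssn FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(unsafe), len(safe))  # 1 0
```

The unsafe query leaks the entire table; the parameterized one returns nothing, because no user is literally named `nobody' OR '1'='1`.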

Unfortunately, none of these practices will deter a user with the appropriate credentials and a willingness to steal. Malicious insiders possess the ultimate security trump card: authorized access to critical information.

This problem is compounded by organizational bad habits. For the sake of convenience, many organizations ignore the strictures of a rigorous authentication and access control plan and share IDs and passwords among users or DBAs. Insiders can also take advantage of the fact that coworkers, managers, and executives have a hard time mistrusting their own employees. "In many companies, you have high-privilege users with well-known passwords that haven't been changed in years," says Ron Ben Natan, CTO of database security company Guardium and author of the book Implementing Database Security and Auditing (Elsevier, 2005).

As more users within the enterprise get access to sensitive information, the likelihood grows that an opportunist will see an opening. "Temptation is probably the biggest risk leading to ID theft," says James Koenig, co-leader of the PriceWaterhouseCoopers privacy practice. "People get creative if they get access."

High-profile cases bear this out. Early this spring, eight employees from four different banks, including Bank of America and Wachovia, sold more than 670,000 customer account records to an identity thief.

Regardless of motivation, once those insiders steal the data, a voracious underground market for identity information has made it easier to sell the stolen goods. In June 2005, The New York Times reported on two Web sites that let anonymous buyers and sellers conduct transactions in stolen information. IRC channels are another popular venue.

COMBATING INSIDER THREATS

As mentioned, general database security best practices mirror many network-based best practices. But when it comes to the insider threat, security architects have to dispel a comforting illusion--namely, that the database is secure because it's nestled behind an array of firewalls and intrusion detection or prevention systems and has been scanned for vulnerabilities. True vigilance demands that architects craft policies directly designed to thwart abuse by insiders.

Best practice number one is to limit employee access to sensitive information. "Companies give too much access to information because it's easier to grant access to an entire department rather than determine who the few people are who need access to actually perform their duties," says Rebecca Herold, a consultant on information privacy, security, and compliance and author of several books on information security.

Access rights should also be conditioned on employee status. If an employee changes roles or undergoes disciplinary action, for instance, his or her access rights should be adjusted appropriately.

The next step is to put controls in place to log all access to sensitive information, including who accessed the information and how they used it. Oracle, IBM, Microsoft, and Sybase all include logging functions built into their platforms.

For example, Oracle's recently released 10g Release 2 database platform includes a new Advanced Security option that offers detailed auditing. "You can assign an audit policy to a table or column to look for suspicious activity," says Paul Needham, director of product management for database security at Oracle. "You can attach audit policies to specific records or columns so an audit record is only generated on specific conditions, such as information being accessed during off hours."
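The idea behind condition-based auditing is worth making concrete. The sketch below models it in Python; the policy structure, field names, and business-hours window are illustrative assumptions, not Oracle's actual API (Oracle implements this through its fine-grained auditing facility):

```python
from datetime import datetime

# Sketch of condition-based auditing: generate an audit record only
# when a query touches a sensitive column AND the policy's condition
# (here, off-hours access) is met.
BUSINESS_HOURS = range(8, 18)   # 08:00-17:59 counts as on-hours

def should_audit(query_columns, when, policy):
    touches_sensitive = bool(set(query_columns) & set(policy["columns"]))
    off_hours = when.hour not in BUSINESS_HOURS
    return touches_sensitive and (off_hours or not policy["off_hours_only"])

policy = {"columns": {"ssn", "salary"}, "off_hours_only": True}

# A daytime SELECT on a sensitive column: no audit record is written.
print(should_audit({"ssn"}, datetime(2005, 8, 1, 14, 0), policy))  # False
# The same query at 2 a.m. triggers one.
print(should_audit({"ssn"}, datetime(2005, 8, 1, 2, 0), policy))   # True
```

Keeping the audit trail conditional is what makes it reviewable: auditors see only the accesses the policy deems suspicious, not every read of every table.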

Oracle also supports strong authentication through the use of digital certificates that can be deployed on smart cards, tokens, or employee PCs.

Microsoft's forthcoming SQL Server 2005 also boosts native security capabilities, including the ability to enforce the use of strong passwords and to create more targeted permissions for database users.

Unfortunately, many organizations fail to take advantage of all the security capabilities built into database platforms. "Organizations are under increasing pressure to get their applications and systems launched quickly," says Herold. "Oftentimes they don't spend enough time looking at how they can use [native security capabilities] to manage risk."

A critical subset of users, including DBAs and any systems administrators with access to sensitive information, should be closely monitored. Third-party solutions from companies such as IPLocks, Guardium, and Application Security can augment native database logging and audit capabilities (more on these later).

It's also advisable to take auditing functions out of the hands of DBAs and put them into the hands of a compliance officer or another group, such as a security or network operations team. "If you're a DBA, there's no reason you can't modify data in a table, and the audit trail becomes poisoned," says Guardium's Ben Natan. "You need to preserve an independent audit trail."

Finally, architects should promote a culture of security awareness in the data center. DBAs tend to focus on data performance and integrity to the exclusion of other concerns. In today's threat climate, that's no longer acceptable.


Though all major database platforms have built-in access controls and auditing functions, administrators and auditors may find it cumbersome to search through raw logs for unusual behavior. Third-party database monitoring tools can augment database auditing and logging functions by aggregating and correlating log events and searching for deviations from normal activity patterns.

Database Monitoring Systems

These products send alerts when anomalies are detected and offer a single repository of audit information across multiple database platforms. They can also help enforce access controls and in some cases even halt malicious activities. All these third-party products are designed for Oracle, Microsoft SQL Server, Sybase, and IBM DB2 (see table at left).

It's important to note that third-party monitoring systems should be run by a group other than the database team, such as a security or network operations team.

However, to get the most out of a database monitoring product, architects will have to invest in some training. "The infosec team needs to understand SQL and some of the main constructs in the database," says Ben Natan.

Keep in mind that these tools also have serious drawbacks. For one thing, they require staff to monitor and analyze events and respond to alerts. They may also generate false positives and affect database performance. And depending on how they're deployed, they can even miss critical activity entirely.

THREE APPROACHES TO MONITORING

The database monitoring vendors take three approaches to collecting data. The first consists of a passive server or appliance that plugs into a tap or span port on a switch and monitors traffic into and out of the data center, much like how an IDS sensor watches network traffic. Application Security's AppRadar product and Tizor Systems' TZX 1000 take this approach.

The second approach takes an appliance and puts it directly inline so that it intercepts and passes along packets as they pass into and out of the data center. By sitting inline, such devices can drop traffic based on a set of rules. Imperva's SecureSphere and Guardium's SQL Guard can be deployed inline or as passive monitors.

Both the inline and passive monitoring systems can detect attack signatures specific to database platforms. For instance, AppRadar can send alerts about attacks against known database application and OS vulnerabilities, while Imperva uses Snort-compatible signatures to detect the same kinds of attacks. By focusing on database-specific signatures rather than a complete set of general network attacks, these products reduce the risk of generating false alarms.
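Signature matching itself is straightforward pattern recognition over the SQL stream. A toy sketch in Python--the two signatures are generic illustrations, not any vendor's actual rules:

```python
import re

# Sketch of a signature matcher narrowed to database-specific patterns,
# in the spirit of the monitoring products described above.
SIGNATURES = [
    ("sql-injection-tautology", re.compile(r"'\s*OR\s*'1'\s*=\s*'1", re.I)),
    ("xp_cmdshell-abuse", re.compile(r"\bxp_cmdshell\b", re.I)),
]

def match_signatures(statement):
    """Return the names of all signatures matching a SQL statement."""
    return [name for name, pattern in SIGNATURES if pattern.search(statement)]

print(match_signatures("SELECT * FROM accounts WHERE user='' OR '1'='1'"))
print(match_signatures("SELECT name FROM accounts WHERE id = 42"))  # []
```

A database-focused signature set stays small and precise, which is exactly why these products generate fewer false alarms than a general-purpose network IDS watching the same wire.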

The final approach to collecting data pulls information directly from the database. An agent or server interacts with the database to record information such as access to the database and modifications to information stored there. Vendors using this approach include Lumigent Technologies, which deploys individual agents on each database that report back to a central monitoring station; and IPLocks, which deploys a server that polls individual databases at regular intervals. These products can gather and analyze logs from disparate database platforms into one central location, as well as allow a security or network operations team to monitor the behavior of privileged users.

As the table below shows, each method has benefits and drawbacks. Passive monitors won't affect database performance or add network latency, but they also can't detect the activity of DBAs and other high-privileged users who access the database directly rather than over the network. They're also less effective at actually blocking attacks because they rely on TCP resets, which may be initiated after the attack or policy violation has occurred. This is a significant drawback given the irreversibility of information theft--once it's gone, there's no getting it back.

However, the vendors of passive monitoring systems say a majority of businesses aren't ready to let a machine make blocking decisions for fear of killing legitimate transactions.

"We could prevent an action, but none of our customers want us to do that," says Adrian Lane, CTO of IPLocks. "The damage that could be caused is more costly than the security threat itself."

Other passive monitoring vendors agree with this sentiment, but they're also hedging their bets. Just as intrusion detection morphed into intrusion prevention, it appears that database monitoring products will eventually include some form of blocking capability, even if customers don't turn it on. "We'll add some mitigation features, perhaps by the end of the year," says Ted Julian, vice president of marketing at Application Security.

Inline monitors can kill unacceptable activity immediately, thus preventing a potentially devastating attack. However, they also may generate false positives and block legitimate activity. Inline monitors can also add unacceptable latency to high-volume transactions, and like their passive monitoring brethren they're blind to direct access on the database.

Systems that pull information directly from the database don't suffer the blind spot of network-based monitors and thus get the most comprehensive coverage of user activity. However, they draw on CPU resources from the database because they have to access system memory, either by residing directly on the database or by logging in to the database to pull audit records. This method may affect the performance of the database.

Moreover, while these monitors can alert administrators to unauthorized or unusual behavior, they can't do anything to stop it. Agent-based systems must also be deployed directly on the database itself, a fact that may discomfort DBAs who are reluctant to load additional software onto production systems. And any enterprise with a large number of databases will be wary of the additional management load that hundreds of agents represent.

However, given the fact that network-based monitoring systems can miss the activity of any user who logs in directly to the database, several third-party vendors have or plan to include agents as part of their solution.

For instance, Guardium supplements its SQL Guard network monitoring product with a software probe called S-TAP that can be loaded onto individual databases. The S-TAP software will send logs of local access back to the Guardium appliance. These agents are available for all four major database platforms. Imperva also plans to launch an agent this month to supplement its network-based appliance. Application Security has a database agent for SQL Server and a network monitor for Oracle, but it also plans to introduce agents and network monitors for all four database platforms.

Note that IPLocks avoids this issue because its server-based product logs directly into each database to pull logs and events back to the IPLocks server for analysis.

KNOWING WHAT TO WATCH

Third-party database monitoring systems live and die by the policies that determine whether alerts are sent and transactions are blocked. While each product takes a slightly different approach to the problem of monitoring user and application behavior, they all operate on the same basic principle: Create a baseline of normal activity, then sound an alarm for deviations from that baseline.

Baselines can be established for users to see what kinds of information they access and what kinds of queries they issue. Other factors, such as time of day, the users' source IP and MAC addresses, and the applications they use, also become part of the baseline. Administrators can then set alerting thresholds, either based on deviations from the baseline (for instance, Joe User selected 30 percent more records today) or on hard and fast rules (Joe User isn't allowed to see Social Security numbers).
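The "30 percent more records" style of threshold is easy to express. A minimal sketch, assuming an invented per-user baseline of daily row counts:

```python
# Sketch of threshold alerting against a per-user baseline: fire an
# alert when activity exceeds the baseline by more than max_deviation.
def check_activity(user, rows_selected, baseline, max_deviation=0.30):
    """Return an alert string if activity exceeds the threshold, else None."""
    allowed = baseline[user] * (1 + max_deviation)
    if rows_selected > allowed:
        return f"ALERT: {user} selected {rows_selected} rows (baseline {baseline[user]})"
    return None

baseline = {"joe": 1000}          # Joe normally selects about 1,000 rows a day
print(check_activity("joe", 1200, baseline))  # within 30 percent: None
print(check_activity("joe", 1400, baseline))  # exceeds 30 percent: alert fires
```

Hard and fast rules (a forbidden column, a forbidden table) are simpler still: a set-membership check rather than an arithmetic one.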

For example, Lumigent's Entegra agent software uses three different modules to collect information to send to a server-based repository for analysis. A data modification agent records every change made to the database. A data view agent monitors access to the data. A structure agent looks for activity relating to logins and permissions (for instance, a DBA suddenly granting a low-level account a greater range of authority to select tables).

IPLocks also monitors three subcategories of information: objects, such as specific tables or columns; users and their activities; and sessions, which allows IPLocks to determine if transactions have valid access to the database.

The product comes with a number of generic policies that are easy to deploy. However, administrators will have to create their own policies and define rules for specific tasks, such as monitoring the behavior of a particular DBA.

Imperva generates policies automatically. The appliance is deployed in learning mode. It monitors database traffic in real time and profiles all the users and applications, the tables accessed, the operations on those tables, stored procedures, and the IP addresses that users come from.

The profile baseline is built from statistical use patterns. "If it sees a user X number of times over X number of hours, a user gets added to the profile," says Mark Kraynak, director of product marketing at Imperva. Once the product establishes a baseline, it then looks for deviations. At this point, administrators can set policies for when the appliance should send an alert and when it should block new transactions. Once the learning period is complete (which could take days or weeks, depending on the size of the data center), the appliance can be switched to blocking mode.

Of course, it's entirely possible that the product will be running in learning mode while an insider is stealing data. Thus, Kraynak says it's important for administrators to review the results of the learning mode to ensure that unacceptable behaviors haven't been built into the system. The system also needs to relearn for new users or existing users who may not have been using the database during the learning period.
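The learn-then-enforce cycle described above can be sketched in a few lines. The event format, the occurrence threshold, and the table names below are illustrative assumptions, not Imperva's implementation:

```python
from collections import Counter

# Sketch of a learning-mode profiler: observed (user, table, operation)
# tuples that recur often enough become the profile; after learning
# ends, anything outside the profile is flagged as a deviation.
class Profiler:
    def __init__(self, min_occurrences=2):
        self.min_occurrences = min_occurrences
        self.counts = Counter()
        self.profile = set()
        self.learning = True

    def observe(self, user, table, op):
        key = (user, table, op)
        if self.learning:
            self.counts[key] += 1
            if self.counts[key] >= self.min_occurrences:
                self.profile.add(key)
            return None
        return None if key in self.profile else f"DEVIATION: {key}"

    def end_learning(self):
        self.learning = False

p = Profiler()
for _ in range(3):
    p.observe("joe", "orders", "SELECT")    # routine activity, learned
p.end_learning()
print(p.observe("joe", "orders", "SELECT"))    # None: in the profile
print(p.observe("joe", "salaries", "SELECT"))  # flagged as a deviation
```

The sketch also makes the article's caveat concrete: whatever an insider does during the learning window gets baked into the profile, which is why reviewing the learned baseline before enforcement matters.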

Good policy creation is essential to keeping down the number of false positives. "Customers casting a broader net will get more false positives," says IPLocks' Lane, "but that's because they want to see anomalous events they weren't expecting. This could flood you, or provide you with useful information on how to adapt your threat model."

In addition, because these tools deal with a company's most sensitive property, administrators will likely have to respond to every alert (or at least every critical alert), so architects should be prepared to devote human resources to monitoring and analyzing these systems.

Andrew Conry-Murray, technology editor, can be reached at [email protected].

Privacy Panel Discussion

As part of IT Architect's Data Privacy series, we invited three security experts to discuss critical issues relating to information protection. Selected excerpts will accompany each article in the series, but we've posted the entire discussion here. You can click on a specific question to jump to the responses, or read the full discussion.

1) Is the recent rash of data privacy outbreaks the result of a dramatically growing problem, or are companies now forced to publicize them?

Rubin: The former. The level of malicious penetration has dramatically increased in recent times. This is due to wider proliferation of zombies, continuing vulnerability of Windows, people not keeping up with patches, and increased sophistication and personal networking among the bad guys.

Herold: I agree, with some caveats. Technology is more complex than ever before, businesses are sharing information with other businesses more than ever before, and information handling and processing capabilities by personnel and business partners are more mobile and diversified than ever before. These situations have created even more risks than in the past for security breaches to occur. Security is much more under the control of end users than ever before, and most end users aren't aware of the risks or how to properly protect against privacy and security incidents.

However, there are still the old security breaches occurring that have been the centerpieces of recent incident news, such as losing backup tapes, having printed confidential information distributed publicly because of improper handling and disposal practices, and fraudulent use of information by authorized employees. The number of insider incidents by personnel in trusted positions continues to grow. So yes, the problem is growing because there are more ways and opportunities for breaches to occur-both technically and non-technically.

Privacy is also a topic that wasn't really talked about with much significance in the business world up until around five to 10 years ago. Incidents are occurring now that weren't considered incidents back then. Now that consumers are more savvy and aware of the risks to their privacy through improper business handling practices, and with the advent of emerging complex and mobile technologies, businesses are starting to become more aware of the risks to privacy, but they still haven't dramatically changed their information-handling practices, which also contributes to having more privacy incidents.

Richardson: A question for Rebecca: For starters, when you say "security is much more under the control of end users than ever before," I find myself wondering which of a couple of things you mean. Do you mean individuals have to look out for themselves more than ever? Or do you mean we entrust more security functions to end users?

Herold: As recently as 10 years ago (and probably even less than this), most of data security could be centralized. The information was primarily structured data under the centralized control of the infosec (or IT admin) area, stored on a mainframe, accessible through terminal sessions, and unable to be downloaded to desktops or mobile devices. Technology advances have changed this dramatically. A few studies from 2004 demonstrate this: An IDC study revealed that unstructured data (very generally, Word documents, spreadsheets, e-mails, and other types of documents that end users ultimately decide how to distribute, protect, and so on) doubles every two months in large corporations. A Goldman Sachs study showed that 90 percent of data within a corporation is unstructured data. A Meta Group survey showed that the average large corporation generates over 300,000 documents per month in the performance of mission-critical processes. End users are ultimately responsible for securing information more than ever before--they're controlling how the information is secured.

So yes, end users are entrusted with more security oversight and functions than ever before by the very utility of new technologies. It's not a choice organizations have really consciously made, but one that evolved from having to address the proliferation of handheld computing devices, wireless computing, and work-from-home situations, among other things. Hopefully the end users have received the training and awareness--not to mention the policies and procedures to follow--to do what's appropriate to safeguard this information. However, it's likely that they don't receive enough information necessary to adequately protect the information they're sending in e-mails, storing in PDAs and BlackBerries, and processing on their home family laptop.

Richardson: I've been asked by newspapers a couple times recently whether or not there are actually more data theft incidents, or whether there are just more of them coming to light because of legislation requiring disclosure. I've tried to think about this in light of the CSI/FBI survey statistics over the years. They've shown organizations reporting fewer and fewer intrusions to the police (in 2001, 36 percent of respondents who had suffered an intrusion reported it to the police-it's dropped to 20 percent since then). If there's less reporting, but we're hearing about more incidents, then one might argue that there must be lots more incidents. But I don't buy that argument, to tell you the truth. The real point is that organizations have never been eager to talk to the police, let alone their customers. Because organizations fear public and legal backlash, they're being more responsible about disclosing incidents. But having given it some reflection, my opinion is that there hasn't been an overall increase in incidents-companies are just talking about it more openly out of fear that not talking about it will make things worse in the long run.

As an aside, it seems likely that many organizations aren't reporting incidents even though they're legally required to do so. Something on the order of one in five of our survey respondents say they've suffered theft of proprietary data. Let's say that only one quarter of these crimes involve the theft of customer data. That would still suggest that one in 20 of our survey respondents had a data breach that they might need to notify customers about. There were 500 respondents last year, meaning that one might have expected a couple dozen such notifications. And this from only 500 respondents-think how many corporations there are in the country. So far this year, we've had something of a media frenzy looking for these kinds of crimes and really only a half dozen or so have surfaced. There probably should have been hundreds. I harbor a strong suspicion that most such breaches go unreported (and always have).

While I believe the number of incidents is the same or perhaps even lower, I also believe the potential damage from crimes where private data is stolen has increased dramatically, but that's an issue for another day. Furthermore, it seems pretty clear that there's more incentive to steal such data these days, so if the number of breaches hasn't increased yet, it seems likely that it will, simply because there's more and more money at stake.

Herold: I agree the damage from crimes has increased dramatically. However, I disagree that the number of incidents is the same or lower. Incidents can involve more than just fraud or theft. Some incidents are accidents, some are malicious, and some are the result of sloppy management of personal information that customers discover and publicize or take to court. Some incidents are wrong decisions organizations make to share personal data with third parties, after which the third parties end up misusing the data, and some incidents are never reported because the organization doesn't realize there's personal information involved (for example, a lost laptop containing a database of customer personal information). Because data processing, handling, and storage are now so widely dispersed and decentralized, and because there's more personal data in existence than ever before, it follows that there will be more incidents. Security is now predominantly in the hands of end users, and technology evolves much more quickly than the security that can be applied to it. More incidents will occur as security tries to catch up, and as the caretakers of personal information don't realize the risks involved and make poor security-related decisions.

Richardson: Following along behind Rebecca's comments, I agree with pretty much all she has to say with regard to the prior centralization of databases, say, ten years ago. I may have been a bit sloppy in my earlier remarks. What I was trying to focus on, however, was the question of whether the latest spate of media coverage of data breaches is a result of more incidents, or more reporting by companies. It's certainly true that if you go back far enough to pre-Internet days and then even further back to pre-PC networks, the whole cybercrime picture is entirely different--absolutely.
But if we merely go back prior to the California disclosure law-and to be honest I don't think anyone has the data to show one way or the other for sure-I suspect there were as many of at least some kinds of breaches, such as backup tapes getting lost in the mail, fraudsters bilking credit data brokers out of their customer data, and so on. I'd argue that there were probably as many data mishaps in, say, 2002 as there will be in 2005.

Or at least there are as many incidents where databases are breached and lots of records are stolen. Rebecca made a great point that more and more data is also being stored in unstructured formats. This data is no doubt leaking off lots and lots of individual desktops. I'm a little skeptical about the Goldman Sachs 90 percent number mentioned, but the main point holds that there are a lot more spreadsheets floating around on individual desktops with little or no real security. It's hard to say what the real overall impact of this is (I've got a lot of company data on my machine, for instance, but I don't think any of it could be used for identity fraud), but it's definitely worth thinking about in addition to worrying about more conventional network and database breaches.

In any case, the stakes are higher.

2) Do you think laws such as California's SB 1386 will actually prompt companies to implement better privacy and security controls? Or is it more cost-effective to maintain the status quo and then just write off whatever fines and bad publicity accompany a breach? (Or, more cynically, are these laws just more incentive to cover up a breach?)

Rubin: I think the law is a step in the right direction. If companies are worried about embarrassment, they're more likely to clean their own house. Currently, there's little incentive for companies to protect data about other people who aren't their customers.

Herold: I believe that in an idealistic world, laws wouldn't be necessary. However, realistically, businesses will only spend money if they can see a return on investment, so laws are definitely needed to provide the motivation for organizations to significantly improve their security and privacy controls. Some companies are gambling that they'll either not have anything bad happen to them, or that if something bad does happen, that no one will find out or that none of the regulators will ever come knocking on their door to check their compliance. However, with the increased number of news items reporting the wide range and variety of privacy and security incidents, organizations are starting to realize that they need to be prepared or face not only fines, penalties, and possible civil actions, but also be significantly impacted through lower stock values and decreased value of their corporate brand following an incident. Smart business leaders realize that they can't afford to take that gamble--the odds are becoming less and less in their favor.

Richardson: I think legislation has had a net positive effect on the security of most organizations. Some studies suggest that more money is being spent on security. Businesses in some sectors (medical, financial) surely must have decided that they can't afford to be blasted in public by a large-scale incident. But my take is that current legislation doesn't really get the whole job done because individual citizens still have no real protection or redress from crimes against them occurring as a result of sloppy security at organizations that may store information about them even without their knowledge. Given the increasing reach of corporations and the personal data they store, this really isn't acceptable.

3) What measures should organizations take to prevent data leakage when portable devices are lost or stolen? Laptops are the obvious examples, but PDAs, cell phones, USB drives, and even iPods can store large amounts of data, making them tempting targets for thieves. Encryption seems like an obvious solution, but if this is used, how can the cryptographic keys be stored? (For example, keys generated from passwords are vulnerable to dictionary attacks, and users are prone to leave their token cards alongside their laptops.)

Rubin: I have a solution to this. Since I use a Mac, there's something called an encrypted file system; it exists in Windows as well. I create logical drives that are encrypted and stored as encrypted flat files. When I mount them, I'm prompted for a passphrase, which I make very long and hard to guess, and which I remember because I've been using it for a long time. Once the drive is mounted, it appears as a normal directory. When I unmount it, it reverts to an encrypted flat file. On a Mac, you can add the key to the key ring and keep the key ring encrypted. These encrypted file systems are actually easier to use than they sound, and you can move encrypted drives around like regular files. It gets a little trickier on mobile devices. Their OSs are so insecure right now that my advice is simply not to put anything sensitive on them, whether it's a PDA or a cell phone. If you lose your cell phone and someone finds it, it's pretty easy to recover the information even if you've password-protected the data.
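The dictionary-attack worry raised in the question is usually addressed by salting and stretching the passphrase before it ever becomes a key. Below is a minimal sketch of that derivation step using Python's standard library; the salt size and iteration count are illustrative assumptions, not what any particular encrypted file system actually uses.

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Stretch a passphrase into a 32-byte volume key using salted
    PBKDF2-HMAC-SHA256. The salt defeats precomputed dictionary tables;
    the iteration count slows down every guess an attacker must try."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations, dklen=32)

salt = os.urandom(16)  # stored in the clear alongside the encrypted volume
key = derive_key("a long passphrase I have used for years", salt)

# The same passphrase and salt always reproduce the same key...
assert key == derive_key("a long passphrase I have used for years", salt)
# ...while a fresh salt yields an unrelated key, so two volumes protected
# by the same passphrase don't share a key.
assert key != derive_key("a long passphrase I have used for years", os.urandom(16))
```

A long passphrase still matters: key stretching only multiplies the attacker's cost per guess, it can't rescue a passphrase short enough to enumerate.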

One thing that's intrigued me lately is the so-called "phone home" service, where a stolen or lost laptop tries to connect to a security service whenever it's connected to the Internet. This makes it possible to find a lost laptop and can lead to the thieves being caught. These services are gaining in popularity, although they've been around for a while. Who knows? Maybe in the future, thieves won't connect a laptop to the network until they've wiped it clean after stealing whatever data they wanted.

In summary, don't put anything sensitive on USB drives, iPods, or cell phones. Do use encrypted file systems on your laptop. And do get a "phone home" service so that your laptop can get in touch even if it's stolen.

Richardson: I absolutely agree that encrypted volumes on hard drives are a great way to go. On current machines, the overhead of the encryption doesn't noticeably affect user-perceived performance, so the only downside is having to remember one's password or passphrase. People who have good schemes for creating memorable passwords should have no problem with this. (The same goes for folks like Avi, who just keep using the same one for a long time. That's theoretically a no-no, but far less problematic when the password merely mounts a disk volume, since you have to be at the machine to do it.)

I've also taken to keeping sensitive data (though, frankly, precious little of the stuff I work with is actually very sensitive) on USB memory sticks with encrypted volumes on them. You can do this automatically with some USB drives from Memorex, but there have been some vulnerabilities found in how these devices store the password, and I'm not sure what the status of this problem is at present. You can also do it using software to create a virtual drive on any plain-vanilla USB drive. When I travel, I keep the USB encrypted drive separate from the notebook. If the notebook is stolen, the sensitive stuff isn't even on it. If the USB drive is stolen, it's encrypted.

Avi is right that security isn't too impressive on PDAs and cell phones these days, but I wouldn't say it's an all-or-nothing proposition. It's possible to use encrypted Microsoft Word files on Palm devices, for instance. That's less-than-perfect protection, but for discouraging thieves who are just looking over the freshly stolen device to see if there's anything good, it's probably a sufficient deterrent. There are also third-party device password programs that will clean all the data off a PDA after a user-configurable number of failed login attempts. There are some issues with these utilities (some won't wipe installed memory cards clean, for instance), but provided that you're aware of them, they make for far better security than most people have on their desktop computers.

Unless your mobile phone is also a PDA and you can use a third-party utility to lock it down, I'd completely agree with Avi that nothing vital should be stored on it right now.

As for people keeping their tokens next to their notebooks (my token is sitting right next to my machine as I type this), here's my modest proposal for the industry: RSA or somebody else in the token business should team up with Timex or another watchmaker and add token generation to its digital watches. Press a button, get a one-time, six-digit number. Since it would be available across a broad range of watches, people could pick something they could stand to wear (maybe pretty much what they're wearing already). This way, people would always have the token on their wrists, not sitting around somewhere waiting to be lifted. The watch could be stolen, of course, but most people know right away when their watches have been stolen, so they could immediately report the theft.
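The press-a-button, six-digit code Richardson describes is essentially what hardware tokens compute: an HMAC over a shared secret and a moving counter, truncated to a short decimal code. This is the HOTP construction standardized in RFC 4226; the sketch below is a compact stdlib Python illustration, not any vendor's actual firmware.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time code from a shared secret and a moving counter, per the
    HOTP algorithm (RFC 4226): HMAC-SHA1, dynamic truncation, short decimal."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vectors for the secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

In a time-based variant, the counter is simply the current 30-second window, which is how a wristwatch could display a fresh code without any button at all.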

In the meantime, I think tokens still make a lot of sense. But to complement them, we have to require a password (which all the systems I've seen do), and we have to strive to get users comfortable with using and remembering strong passwords.

Of course, all this aims at a technical solution. In many ways, a better solution is to not allow certain kinds of privacy-related data on those laptops in the first place, but this is more of a policy issue than a technical one.

Rubin: For some time now, I've been advocating against password aging. I spent a bit of time in my last book, Firewalls and Internet Security (Addison-Wesley Professional, 2003), writing about that. When people are forced to change passwords, they invariably pick something that's derived from the last one (think rover1, rover2, rover3, and so on). Or they write them down and post them near the computer. My philosophy is:

1. Pick very good passphrases (not passwords, and include spaces).
2. Never ever write them down.
3. Never use them over an unencrypted connection.
4. Don't change them unless you have some reason to believe they were compromised.
5. Every once in a while (say, every three years), break rule number 4.

I know this is controversial, but I can argue (and have) that changing passwords every 60 days leads to more security problems than it avoids, not to mention calls to the help desk to restore passwords and the social engineering attacks that go along with that habit.
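Rubin's preference for passphrases over passwords comes down to guessing entropy: more symbols drawn from a smaller pool can beat fewer symbols from a larger one. A quick back-of-the-envelope comparison (the 7,776-word diceware-style list is an assumed example, not something mentioned above):

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Guessing entropy, in bits, of `length` symbols drawn uniformly
    and independently from a pool of `pool_size` choices."""
    return length * math.log2(pool_size)

# An 8-character random password over the 94 printable ASCII characters
# versus a 5-word passphrase drawn from a 7,776-word list:
print(f"8-char password:   {entropy_bits(94, 8):.1f} bits")   # ~52.4
print(f"5-word passphrase: {entropy_bits(7776, 5):.1f} bits") # ~64.6
```

The passphrase wins by roughly 12 bits, a factor of about 4,000 in guessing work, while being far easier to remember. Note this math assumes random selection; a favorite movie quote has much less entropy than its length suggests.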

Richardson: I agree with Avi for the most part. Changing passwords every 60 days simply doesn't work in practice, unless you only have one or two to keep track of. The way I approach passwords these days is to have a "base" password that I change every year or two, plus a system for altering it in different contexts (at different Web sites, for instance). To be perfectly honest, I have a couple of different base passwords, one of which is for less secure systems, such as Web sites I don't necessarily trust. This is less than perfect in some ways, but it means I always use a different password on each system, and I can always recall what each password is, even if I haven't used it in a long while.

I also think passphrases are a great idea, but there are an awful lot of systems where you can't use them, at least not yet. We're on the same page (as it were) about not writing passwords down, though I'll concede there's something to be said for the less conventional argument that it's better to create strong passwords and write them down in a safe place than to fall back on weak ones. Bottom line, though: there's no reason a strong password can't be memorable as well.

Herold: I think it's helpful to look at this and other information security and privacy issues from a couple of different perspectives: the ideal measures to take, and the practical or realistic ones. Large numbers of personnel within organizations have decided on their own to use personal mobile devices. I know organizations where many marketing, sales, research, and other personnel load customer, consumer, and even employee data onto such devices so that they can continue to work while away from the office.

The specific precautions a company chooses will depend on its own unique environment, business requirements, and the results of evaluating its risks (see "Privacy Policies For Portable Devices"). I also like the idea of "phone home" services or tools, as well as encrypting logical drives or even full disks.

4) When IT architects have limited time and resources to devote to keeping data private, which core technologies and practices should they focus their efforts on?

Rubin: I think having a clearly defined policy for user-identifying data is a must. Once a policy is in place, there needs to be a procedure for ensuring adherence to the policy. Technologies that can be used include data masking (where, for example, only a few digits of a credit card number or social security number are kept), encryption, and data destruction.
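The masking Rubin describes can be sketched in a few lines. The helper below is purely illustrative, assuming a "keep the last four digits" policy; it is not any particular product's API.

```python
def mask(value: str, visible: int = 4, fill: str = "*") -> str:
    """Data masking: keep only the last `visible` digits of an identifier,
    replacing the rest so the stored or displayed copy is useless to a thief."""
    digits = "".join(ch for ch in value if ch.isdigit())
    return fill * (len(digits) - visible) + digits[-visible:]

print(mask("4111 1111 1111 1111"))  # ************1111
print(mask("078-05-1120"))          # *****1120
```

The design point is that masking happens before storage or display, so a compromised report, log, or screen never contained the full number in the first place.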

Herold: I agree with Avi that establishing clearly defined information security and privacy policies is necessary, as is implementing procedures to support them. Additionally, such policies and procedures must be strongly and visibly supported by executive management, ideally by the CEO. If you want personnel to comply with policies and procedures, you must have strong leaders who communicate the importance, and the requirement, of compliance. Following this is the need for a strong, effective, and ongoing training and awareness program that teaches all personnel how to comply with the privacy and security requirements in the course of their jobs. Make security and privacy a component of each position's job requirements and a consideration in each employee's yearly appraisal or review.

There are also several other core practices to focus on. You reference IT architects specifically, but information privacy and security must be integrated into all business practices throughout the entire organization, from the beginning of a service or product's lifecycle to its retirement; it's not something that can be successfully addressed through technology alone.

However, if you want to focus just on IT and privacy management, I believe IT architects need to identify all the Personally Identifiable Information (PII) that their organization handles and map its flow from the points where it's collected, through their networks, to the points where it leaves the organization. Perform a privacy impact assessment to identify where the greatest risks to the PII are, then apply the appropriate technologies based on risk. In some organizations, this may mean encrypting all mobile computing devices; in others, implementing third-party data processor security requirements; and in still others, implementing records retention and effective data disposal solutions. It may also mean heightened legal and regulatory compliance efforts, or a combination of these and other actions. Organizations need to focus their efforts based on their own unique risks, which vary from organization to organization. A common mistake is trying to implement the same risk management processes, technologies, and programs that another organization has implemented successfully. Information security and privacy management must be unique to an organization's environment and risk levels, not based on someone else's.

5) A relatively new technology category has cropped up that's referred to as "information leak prevention," "extrusion prevention," or "content monitoring." It's usually a combination of hardware and software that sits on a network and reads data going by, matching it against databases or fingerprints of private or confidential data, and quarantining anything that shouldn't pass through the firewall. Some vendors are Reconnex, Tablus, PortAuthority, and Palisade Systems. What do you think of this technology, and to what extent could it provide an answer to the data privacy problem?

Rubin: Full disclosure: I'm on the technical advisory board of Tablus, and I have an equity interest in the company.

I think this approach offers great promise. While false positives can be annoying, I can't think of a better way to monitor so that only "approved" content gets out of an organization. Clearly these technologies are easy for a clever attacker to defeat. However, they're effective against the 98 percent of users who aren't computer whizzes. Also, they're extremely useful against accidental and unintentional leaks of sensitive information.

I don't think any technology is going to provide the "answer" to the data privacy problem. The necessary ingredients are user awareness, helpful technologies such as the extrusion prevention products mentioned here, vigilance, and adherence to data destruction, encryption, and access control policies.

Richardson: I don't own a stake in any content monitoring solutions, but I nevertheless think they're pretty interesting and probably should be more widely deployed. Even for solutions where it's trivial for a hacker to bypass the monitoring, there's significant value simply in stopping accidental disclosures of information (such as forwarding sensitive e-mails offsite). Furthermore, in the case of hackers outside the perimeter, these monitoring solutions may be more effective than they first seem. If a monitor looks specifically for patient identity numbers at a health care organization, you might think a hacker could find the data, encrypt it so that it no longer contained visible patient numbers, and render the monitor useless. But it's hard to imagine an outside hacker finding the numbers in the first place without looking at a few of them (and thus causing them to be sent across the wire and stopped at the perimeter). Whether this works depends on the specific situation and the sensitivity of the monitor, but I think it holds some promise.
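At their core, the content monitors discussed here match outbound traffic against fingerprints of sensitive data. The following is a deliberately simplified sketch assuming regex patterns for SSN-formatted numbers and a Luhn checksum to suppress the false positives Rubin mentions; real products use far richer, proprietary fingerprinting.

```python
import re

def luhn_ok(number: str) -> bool:
    """Luhn checksum: weeds out random 16-digit strings that merely look
    like card numbers, cutting down on false positives."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")    # SSN-formatted numbers
CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")   # 16 digits, optional separators

def should_quarantine(payload: str) -> bool:
    """Flag outbound content containing SSN-shaped numbers or
    Luhn-valid 16-digit card numbers."""
    if SSN_RE.search(payload):
        return True
    return any(luhn_ok(re.sub(r"[ -]", "", m.group()))
               for m in CARD_RE.finditer(payload))

assert should_quarantine("card on file: 4111 1111 1111 1111")
assert should_quarantine("patient SSN 078-05-1120")
assert not should_quarantine("ticket #4111 1111 1111 1112")  # fails the Luhn check
```

As the panelists note, a determined insider can defeat this by encrypting the data first; the value lies in catching accidental leaks and unsophisticated exfiltration.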

What I think holds more promise, however, is designing applications and databases so that the sort of data masking Avi mentions (along with encryption, where appropriate) is a routine component. The real solution to privacy is to make it very hard to steal information in a form that can actually be used for fraud, and to track uses of data so that alarm bells sound when stolen data is used to attempt a crime. (You can't do this sort of thing now with, say, Social Security numbers, but that has as much to do with the outdated design of Social Security numbers as with anything else.)

Herold: This is indeed very interesting. I've noticed the emergence and growth of such technologies in the past few years, particularly among organizations trying to comply with regulations such as HIPAA and the Gramm-Leach-Bliley Act. These technologies create an intriguing dichotomy: They help preserve privacy by preventing the leakage of private information and protecting access to it, yet some see them as tools that infringe on the privacy of the people sending and receiving that information, because they amount to covert monitoring of employees or others. It would be an interesting exercise to examine whether the use of such tools in some jurisdictions would violate local laws. For instance, would use within an organization in the United Kingdom violate the Data Protection Act of 1998? Would use in France violate the Labour, Civil, and Criminal Codes? And would such tools violate an organization's own published information security and privacy policies, particularly those addressing employee privacy, requiring the policies to be updated before the tools are implemented? Organizations evaluating such tools should consider these issues as well.

By the way, I don't own a stake in such tools either, but I find the conversations with vendors of these tools quite fascinating.

I strongly agree with Avi and Robert. Security and privacy must be built into all applications and network solutions from the very beginning of the lifecycle and follow through to the expiration of the application or network solution.
