Upcoming Events

Where the Cloud Touches Down: Simplifying Data Center Infrastructure Management

Thursday, July 25, 2013
10:00 AM PT / 1:00 PM ET

In most data centers, DCIM rests on a shaky foundation of manual record keeping and scattered documentation. OpManager replaces scattered data center documentation with a single repository for data, QR codes for asset tracking, accurate 3D mapping of asset locations, and a configuration management database (CMDB). In this webcast, sponsored by ManageEngine, you will see how a real-world data center map stored in RackTables is imported into OpManager, which then provides a 3D visualization of where assets actually sit. You'll also see how the QR code generator links real assets to the monitoring world, and how the layered CMDB provides a single point of view for all your configuration data.


A Network Computing Webinar:
SDN First Steps

Thursday, August 8, 2013
11:00 AM PT / 2:00 PM ET

This webinar will help attendees understand the overall concept of SDN and its benefits, describe the different conceptual approaches to SDN, and examine the various technologies, both proprietary and open source, that are emerging. It will also help users decide whether SDN makes sense in their environment, and outline the first steps IT can take for testing SDN technologies.




Data Classification Tips & Technologies Part 2

Even though the process of classifying data is, in theory, well understood and considered a tenet of good business and security practices, industry observers say companies are not doing it across the board—or doing it well.

“We’re endlessly surprised by how rarely it’s done successfully and consistently across an entire organization,” says Eric Bataller, a senior consultant with information security consultancy Neohapsis and author of a new InformationWeek report, 10 Steps to Effective Data Classification. “We’ve seen all manner of problems thwart data classification programs, but the most common culprits are insufficient management commitment, lack of training and awareness, and overly complex processes and procedures.”

Effective data classification can mitigate the impact of complex, poorly managed environments and make IT more flexible and better able to adapt to business requirements, according to Bataller. Throwing technology at the problem may help, though with added complexity and expense, he says, but it cannot replace policies and processes. CIOs, he emphasizes, need to exercise leadership.

There is a “whole genre” of enterprise content management (ECM) and e-discovery software designed to automate and structure this process, observes Kurt Marko, a regular contributor to InformationWeek and an IT industry veteran. “However, data classification is only one aspect of ECM, and it's not primarily a technology problem.” While technologies can automate some routine tasks, such as searching and cataloging data sources, he says, they “can't entirely replace the need for a strategy and processes.”

In part two of our look at the steps companies need to take to better classify the various types of data in their organizations (see part 1 here), Bataller recommends starting the process with “low-hanging targets.” Identify a system or infrastructure that is being built or rebuilt and piggyback classification onto that effort; success there provides an example for the rest of the organization. From there, IT can begin tackling bigger targets, such as mammoth ERP systems, he says.

Then it’s time to leverage reusable models. Staff should be directed to work with business stakeholders to develop a series of standard implementations that cover most situations, Bataller advises. “Classifications should be associated with the entire technical and physical stack,” he says. “For example, if a given data set is considered private, all the associated infrastructure should be of the same class or higher. Having data classified as ‘private’ on a server classified as ‘public’ is a no-no.”
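The “same class or higher” rule Bataller describes can be expressed as a simple ordering check. The sketch below is illustrative only; the class names and ranking are assumptions for the example, not part of any product or standard.

```python
# Hypothetical classification levels, ordered from least to most sensitive.
LEVELS = {"public": 0, "internal": 1, "private": 2, "restricted": 3}

def placement_ok(data_class: str, host_class: str) -> bool:
    # A data set may only reside on infrastructure of the same class or higher.
    return LEVELS[host_class] >= LEVELS[data_class]

print(placement_ok("private", "public"))   # False: the "no-no" case
print(placement_ok("private", "private"))  # True
```

Encoding the rule this way makes violations detectable mechanically rather than depending on reviewers remembering the policy.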

Companies should also consider investing in awareness and training programs so that employees will learn what and how to classify. Classification programs must include ongoing monitoring and random auditing, Bataller stresses, to verify that data and systems are being properly managed. “Remember, this is a living program. As the business changes, its data changes. Be critical of classification practices, and encourage IT and business users to suggest ways to improve.”
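The random-auditing step Bataller stresses can be sketched as a spot check against an asset inventory. Everything here is a hypothetical illustration; in practice the inventory would come from a CMDB, and the asset and class names are assumptions.

```python
import random

LEVELS = {"public": 0, "internal": 1, "private": 2}

# Hypothetical inventory; a real audit would pull this from the CMDB.
inventory = [
    {"asset": "db01",  "data_class": "private",  "host_class": "private"},
    {"asset": "web01", "data_class": "private",  "host_class": "public"},
    {"asset": "fs01",  "data_class": "internal", "host_class": "internal"},
]

def audit_sample(assets, k, seed=None):
    # Randomly pick k assets and report any whose data class outranks its host.
    sample = random.Random(seed).sample(assets, k)
    return [a["asset"] for a in sample
            if LEVELS[a["data_class"]] > LEVELS[a["host_class"]]]

print(audit_sample(inventory, k=3, seed=1))  # ['web01'] is the only violation
```

Sampling a different random slice on each audit run keeps the check cheap while still catching drift as the business and its data change.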

Finally, in order to achieve ROI, companies should work toward smarter management of the data, he says. Without a classification program, Bataller notes, all data ends up being treated the same: “managed, with high costs, or not managed, with high risk.”


Network Computing: April 2013
