Kevin Fogarty

Systems Management Dilemma: How Much Is Too Much?

There's a debate brewing among network systems management gurus: How far must one go to get effective, timely information from the unmanageable mountain of super-granular performance data that their over-instrumented, overly chatty equipment keeps trying to provide?

On one side is VMware, which responded so vigorously to customer complaints about the lack of tools for managing virtual servers that, just a couple of years later, it felt compelled to buy big-data analytics companies to help sort through the resulting mass of status updates.

Using tools such as Log Insight, which it bought from developer Pattern Insight in August, VMware evidently plans to add big-data analytics and data mining to its systems. It's also adding a network management suite to make it easier for administrators to consider all the real-time machine-to-machine data supplied by thousands of networked devices while still finding only the answers they need.
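
To make the contrast concrete, here is a minimal sketch of the collect-everything-then-mine-it model described above: a chatty stream of machine data is ingested wholesale, and an analytics pass afterward surfaces only the items an administrator needs to act on. The record format, field names and threshold are assumptions for illustration, not VMware's or Log Insight's actual schema.

```python
from collections import Counter

# Hypothetical illustration of the collect-everything-then-mine-it model:
# ingest a flood of machine-generated records wholesale, then surface only
# the handful of signals an administrator actually needs to act on.
# Field names, levels and the threshold are assumptions, not a real schema.

def surface_anomalies(records, error_threshold=5):
    """Return hosts whose ERROR count meets or exceeds the threshold."""
    errors_per_host = Counter(
        r["host"] for r in records if r.get("level") == "ERROR"
    )
    return {host: n for host, n in errors_per_host.items() if n >= error_threshold}


if __name__ == "__main__":
    # Simulate a chatty stream: mostly noise, a few repeat offenders.
    stream = [
        {"host": "esx-01", "level": "ERROR", "msg": "datastore latency high"},
        {"host": "esx-01", "level": "INFO",  "msg": "vmotion complete"},
        {"host": "esx-02", "level": "ERROR", "msg": "nic flap"},
    ] * 5
    print(surface_anomalies(stream))   # only the hosts with repeated errors
```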

On the other side are those who would rather use data efficiently than profligately. One of them is Shmuel Kliger, founder and chief technology officer of VMware partner VMTurbo, who says any network systems management setup that dumps so much raw data on administrators that they need big-data analytics to sift through it for answers is fundamentally flawed.

"Data center operations are getting more complicated, so approaching it in the same way as older systems vendors who keep adding point tools to handle every new demand makes trying to manage it all even more complicated," Kliger says.

"There may be thousands of data points in the configuration of servers, workload placement, capacity planning, CPU and memory balancing, power management, but if you're doing each of those things with a different tool, it's going to take you a long time," he adds. "VMware had a green field to make up its own more coherent approach to management; instead they put themselves in the same boat as the rest of the systems vendors, delivering a basket of tools, some with a common UI, that are not necessarily integrated, that have no semantics function that integrates the data into a coherent picture for systems management."

The Big Four framework vendors delivered a "single pane of glass" view of the network, but they also dumped a "management nightmare" on users, according to Kliger's blog on the topic.

A large company might use dozens of point products to manage its hardware, but most of them simply collect issues and deliver them to network admins, who might or might not have the time to sift through the problems and create reports highlighting the important points.

As data centers get more complicated, virtualization disperses responsibility for specific parts of the infrastructure along functional or departmental lines rather than by the physical location of the servers, making the management nightmare worse.

It would be easy to brush Kliger off as a former executive who doesn’t like the direction being taken by the company that bought his brainchild. That might even be true to some extent. But he's not the kind of troll or yahoo who goes to a parade just to fling mud at people on the floats.

He is a former VP of architecture and applied research at EMC and founder/CTO of System Management ARTS--an innovative startup whose Smarts InCharge suite was designed to automatically find, inventory and identify developing problems in network devices to save admins the effort of extensive troubleshooting.

In 2002, when it won a Network Computing Editor's Choice award, SMARTS was one of only two systems management vendors offering Layer 2 network discovery; it was acquired by VMware parent company EMC in 2004.

Before SMARTS, Kliger was a senior researcher at IBM and at the Weizmann Institute of Science, where he earned his master's degree and Ph.D. in computer science.

Kliger may be partisan on some systems management issues, but he's not an idiot. That doesn't mean he's right. It does mean that if he's not right, he at least isn't completely wrong.

IT infrastructures really are getting more complex as cloud and virtualization reduce the importance of the physical characteristics of individual pieces of that infrastructure, making traditional ways of measuring capability by the location of specific clusters or data centers irrelevant.

There's an argument to be made that low-level networking gear shouldn't need the intelligence to solve its own problems. It needs to be fast and communicative--shipping lots of data on traffic flow, application performance and other variables up the line to be analyzed by hardware with more intelligence.
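
A rough sketch of that division of labor, with hypothetical class names, a 1 Gbps link capacity and an 80% threshold chosen purely for illustration: the edge device only counts and exports, while a collector up the line turns the raw counters into something actionable.

```python
import time

# Hypothetical sketch of the division of labor described above: a "dumb"
# edge device only counts and forwards, while a smarter collector upstream
# turns raw counters into something actionable. Class names, the 80%
# threshold and the 1 Gbps capacity are assumptions for illustration.

class EdgeSwitch:
    """Fast and communicative: no local analysis, just counting and exporting."""

    def __init__(self, name):
        self.name = name
        self.bytes_seen = 0

    def observe(self, nbytes):
        self.bytes_seen += nbytes

    def export(self):
        return {"device": self.name, "bytes": self.bytes_seen, "ts": time.time()}


class Collector:
    """The intelligence lives up the line: compute rates, decide what matters."""

    def __init__(self, link_capacity_bps=1_000_000_000):
        self.capacity = link_capacity_bps
        self.last = {}  # previous sample per device

    def ingest(self, sample):
        prev = self.last.get(sample["device"])
        self.last[sample["device"]] = sample
        if prev is None:
            return None
        rate = 8 * (sample["bytes"] - prev["bytes"]) / (sample["ts"] - prev["ts"])
        if rate > 0.8 * self.capacity:
            return f"{sample['device']}: link above 80% utilization"
        return None


if __name__ == "__main__":
    switch, collector = EdgeSwitch("tor-3"), Collector()
    switch.observe(500_000_000)
    collector.ingest(switch.export())         # first sample just sets a baseline
    time.sleep(1)
    switch.observe(200_000_000)
    print(collector.ingest(switch.export()))  # roughly 1.6 Gbps -> alert
```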

Next: VMware Is After More Than Just Good Systems Management

