David Hill

Network Computing Blogger



Viridity's EnergyCenter Brings Energy Management To The Data Center

Mark Twain famously said, "Everyone talks about the weather, but nobody does anything about it." In IT, by contrast, everyone talks about data center energy efficiency, and many vendors are trying to do something about it. The introduction of Viridity's new EnergyCenter offers a particularly intriguing example. Viridity, a start-up company, tackles the data center energy-efficiency problem head on with EnergyCenter, a software approach to data center energy optimization. Corporate initiatives to go "green" are well and good, but there are a couple of pragmatic business reasons why energy efficiency is coming under increasing scrutiny within IT organizations.

First, the demand for computing and storage resources, which obviously require power, is expected to increase. However, expanding the physical footprint of a data center, along with the power infrastructure needed to accommodate new workloads (transformers, power distribution units, and backup generators), is a big no-no. When IT organizations are, at best, in budgetary cost-containment mode rather than cost-reduction mode, obtaining the capital expense (CAPEX) dollars for a data center upgrade solely for power reasons is likely to be a very hard sell. Then there is the price of electricity, which is not likely to go down. Just the opposite. The organizations that own the operating expense (OPEX) budget are feeling increasing pressure from upward-spiraling electricity costs, but that doesn't let IT off the hook, as the enterprise still has to pay the bill for inefficient energy management.
 
For energy management in the data center, good information at the granular device level is necessary, as is the analytical reporting capability to make that information actionable. For example, what would be the impact of a tech refresh with more energy-efficient equipment? Or what would be the energy and cost impact of virtual server consolidation, which powers down physical servers that are no longer needed?
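To make that consolidation question concrete, here is a rough, purely illustrative calculation in Python. The server counts, average wattages, and electricity price are assumptions made up for the example, not figures from Viridity or any particular data center.

    # Illustrative only: hypothetical server counts, wattages, and electricity
    # price; not figures from Viridity or any specific data center.

    HOURS_PER_YEAR = 24 * 365
    PRICE_PER_KWH = 0.10  # assumed average price of electricity, in dollars

    def annual_energy_cost(avg_watts, server_count):
        """Annual electricity cost for a group of servers at a given average draw."""
        kwh = avg_watts * server_count * HOURS_PER_YEAR / 1000.0
        return kwh * PRICE_PER_KWH

    # Before: 100 lightly loaded physical servers averaging 250 W each.
    before = annual_energy_cost(250, 100)

    # After: the same workloads consolidated onto 20 virtualized hosts at 400 W each.
    after = annual_energy_cost(400, 20)

    print("Annual savings: $%.0f" % (before - after))

Even with modest assumptions like these, the difference is large enough to be worth presenting to whoever owns the OPEX budget.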

Alas, gathering good power consumption information has not been easy. Faceplate information is simply not accurate enough: faceplate ratings assume a fixed draw, whereas actual power utilization and consumption are dynamic. How can that be? One example is that modern processing chips use more or less energy depending upon workload.

To address this point, Viridity's EnergyCenter monitors utilization and maps that to energy consumption over time in order to create a more accurate power utilization profile. This, in turn, can be used for analysis, such as identifying underutilized IT equipment and planning when and why to make that aforementioned tech refresh.
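As a very rough sketch of the general idea, the snippet below maps sampled CPU utilization to an estimated power draw with a simple linear model between assumed idle and peak wattages, then integrates over time to get energy. The model and the wattages are assumptions for illustration; the article does not describe Viridity's actual algorithm.

    # Simple linear utilization-to-power model with assumed idle/peak wattages.
    # An illustration of the concept, not Viridity's method.

    IDLE_WATTS = 120.0   # assumed draw at 0% utilization
    PEAK_WATTS = 300.0   # assumed draw at 100% utilization

    def estimated_watts(cpu_util):
        """Estimate instantaneous draw from CPU utilization (0.0 to 1.0)."""
        return IDLE_WATTS + cpu_util * (PEAK_WATTS - IDLE_WATTS)

    def energy_kwh(util_samples, interval_minutes=5.0):
        """Integrate estimated power over evenly spaced utilization samples."""
        hours = interval_minutes / 60.0
        return sum(estimated_watts(u) * hours for u in util_samples) / 1000.0

    # One hour of 5-minute samples from a mostly idle server.
    samples = [0.05, 0.10, 0.08, 0.60, 0.55, 0.07, 0.06, 0.05, 0.04, 0.05, 0.06, 0.05]
    print("Estimated energy for the hour: %.3f kWh" % energy_kwh(samples))

A profile built from monitoring data in this spirit, rather than from faceplate ratings, is what makes underutilized equipment visible in the first place.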

A better way than faceplates is to use physical sensors to collect information in real time. That makes continuous data available and is the most accurate way of gauging energy usage. However, sensors tend to be expensive and intrusive, and they can create management problems of their own, such as keeping track of configuration changes and keeping the sensor network itself up to date.

