Network Computing is part of the Informa Tech Division of Informa PLC


The Idle Cycle Conundrum

One example of repurposing idle capacity is Hadoop. Hadoop is designed to process large data sets, or big data, in parallel for increased speed and reduced cost. To accomplish this, Hadoop spreads the processing and data across multiple nodes, from a few to upward of 6,000. Data files are fed into the Hadoop cluster, and processing functions are then run on those data sets.

With a small number of permanently dedicated resources (primarily the NameNode and JobTracker, plus a few slave nodes), idle resources could be added to the cluster as they become available. This would allow scalable processing of data such as customer or Web traffic data.
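A toy scheduler makes the idea concrete: a fixed dedicated core, plus whatever machines happen to be idle. All names here are illustrative assumptions, not Hadoop APIs.

```python
# Toy elastic cluster: a dedicated core that is always present, plus idle
# machines that join only when their primary workload is quiet.
DEDICATED = ["core-1", "core-2"]  # permanently dedicated slave nodes

def active_nodes(idle_machines):
    """Cluster membership = dedicated core + whatever is currently idle."""
    return DEDICATED + sorted(idle_machines)

def assign_tasks(tasks, nodes):
    """Round-robin task placement across the current membership."""
    return {node: tasks[i::len(nodes)] for i, node in enumerate(nodes)}

# Overnight, two desktops report in as idle:
night = assign_tasks(list(range(12)), active_nodes({"desk-7", "desk-9"}))
# During business hours, only the dedicated core remains:
day = assign_tasks(list(range(12)), active_nodes(set()))

print(len(night))  # 4 nodes share the work
print(len(day))    # 2 nodes share the work
```

The dedicated core keeps the cluster's coordination roles stable; only the worker pool expands and contracts.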

This particular use case would be most relevant in environments with seasonal peaks, weekly or monthly, because data must be redistributed across the cluster as nodes are added or removed. Additionally, fine-tuning would be required to optimize for the architecture, but with the right data sets and resources, this would allow data to be processed that might otherwise be neglected due to other priorities.

Hadoop is just one hypothetical example of the services or applications that can be run using idle CPU cycles on an enterprise private cloud; many more exist. Remember, a private cloud should provide your IT staff with more time to work on strategic initiatives rather than tactical tasks, and allow them to use that time to optimize efficiencies and support IT's customer, the business.