Data Center Dilemmas

Want to know the best way to start an internal data center or reorganize your existing hardware? Then read on...

September 27, 2005


This column responds to a series of questions from NDCF readers. One reader is looking to start a small internal data center; another wants to know whether there is a standard for organizing physical room space in a data center, such as grouping specific servers together.

Starting a Data Center

Response: You will, of course, need to prepare an overall business and technical plan. In addition, you will need to carry out a full structural survey of the candidate facility to check floor loading capability and ceiling height, as well as fully diverse entry points and capacities for electricity and telecoms services into the building. The Office of Government Commerce (OGC) IT Infrastructure Library (ITIL) recommendation, as described in the ICT Infrastructure Management best practice guidance, is a minimum floor loading capability of 5kN/m2 (kilonewtons per square metre) and a slab-to-slab floor height of 3.6 metres, permitting a 600-millimetre void between the fixed floor and the false floor and a usable data center working height of three metres.
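As a sanity check against that floor-loading figure, a loaded rack's weight can be converted to kN/m2 over the floor area that carries it. The rack mass and load-spreading area below are assumed example values, not figures from the ITIL guidance:

```python
# Illustrative check of a loaded rack against the ITIL guideline of a
# minimum 5 kN/m2 floor-loading capability. The rack mass and the
# load-spreading area are assumed example values.

G = 9.81  # gravitational acceleration, m/s^2

rack_mass_kg = 900       # fully loaded rack (assumed)
spread_area_m2 = 1.8     # rack footprint plus the share of aisle over
                         # which the raised floor spreads the load (assumed)

load_kn_per_m2 = (rack_mass_kg * G / 1000) / spread_area_m2
print(f"Distributed load: {load_kn_per_m2:.2f} kN/m2")
print("within guideline" if load_kn_per_m2 <= 5.0 else "exceeds guideline")
```

Note that the raw point load under a single rack footprint is much higher; the structural survey should confirm how your raised floor actually distributes it.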

But when you come down to the nuts and bolts of the data center build-out, you might consider one of the modular racking systems that integrate power distribution, battery backup, air conditioning, and ventilation into a single rack system. Power and cooling are distributed more efficiently within each cluster of racks, and you buy only the amount of power and cooling distribution required for the expected electrical and thermal load. Operating costs therefore grow more nearly in line with the server and networking build-out, yielding a quicker return on investment.
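On the cooling side, the bulk airflow needed to carry away a given heat load follows from a simple energy balance, Q = P / (rho × cp × dT). The 10kW load and 12K temperature rise below are assumed example figures:

```python
# Back-of-envelope airflow needed to remove a rack's heat load, using
# Q = P / (rho * cp * dT). Heat load and temperature rise are assumed
# example values; detailed thermal modelling refines where the air goes.

rho = 1.2      # air density, kg/m^3
cp = 1005.0    # specific heat of air, J/(kg*K)

heat_load_w = 10_000   # rack heat load (assumed)
delta_t_k = 12.0       # inlet-to-outlet air temperature rise (assumed)

flow_m3_s = heat_load_w / (rho * cp * delta_t_k)
flow_cfm = flow_m3_s * 2118.88   # convert to cubic feet per minute
print(f"Required airflow: {flow_m3_s:.2f} m^3/s (~{flow_cfm:.0f} CFM)")
```

This only sizes the total airflow; whether that air actually reaches each server inlet is a separate question, which is where thermal analysis tools come in.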

To check the airflow in your facility, you can use three-dimensional thermal analysis tools such as FLOVENT, supplied by Flomerics. This type of analysis uses computational fluid dynamics modelling to predict thermal flows in a data center and visualises them with 3-D colour graphics. It helps in planning cold and hot aisles and in placing air conditioning units parallel or perpendicular to the rack rows for optimum cooling. The Flomerics website provides useful links to papers by IBM Corp. (NYSE: IBM), Hewlett-Packard Co. (NYSE: HPQ), and others on using this tool to model rack inlet air temperatures in raised-floor data centers.

Organizing Physical Data Center Space

Response: There is no specific standard, since the requirements, resources, and constraints of each organisation differ. However, you might consider reviewing the best practice advice contained in the OGC's ITIL guidelines. ITIL service management principles have been adopted globally by enterprises large and small as a framework for IT best practice. Again, you might want to consult the ICT Infrastructure Management publication, which is available in either book (ISBN 0113308655) or CD-ROM (CD 0113309031) format. Appendix D, “Environment Policies and Standards,” provides useful guidelines for major data centers, regional data centers, and server or network equipment rooms on power, humidity control, access, false floors, and other environmental factors.

Regarding equipment placement within the data center, start with a scalable, resilient, and robust logical design for the server and networking equipment; this then lends itself to mapping onto the physical space available. Typical power dissipation for standard 6-foot equipment racks has steadily grown from 2kW to 10kW with the adoption of modern blade servers, and is set to reach 20kW per rack in the near future. Depending on your racking system and air handling capability, it may therefore not be possible to fully load a rack with servers.
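A quick calculation shows how power, rather than rack units, becomes the binding constraint. The power budget and per-server draw below are assumed example figures:

```python
# How many 1U servers fit a rack when the limit is power and cooling
# rather than physical rack units. All figures are assumed examples.

rack_units = 42                # 1U slots in a standard rack
rack_power_budget_kw = 10.0    # what the racking/air handling supports (assumed)
server_draw_kw = 0.35          # per 1U server under load (assumed)

by_space = rack_units
by_power = int(rack_power_budget_kw // server_draw_kw)

print(f"space allows {by_space} servers, power allows {by_power}")
print(f"usable servers per rack: {min(by_space, by_power)}")
```

With these assumptions the rack tops out at 28 servers, leaving a third of the rack units empty; a survey of your own power and thermal budgets will give the real figure.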

In any case, you should think about using dual-ported Network Interface Cards (NICs) for your servers, grouped for load balancing. Each server in the group can then be dual-homed to two Local Area Network (LAN) access switches and/or server load balancers. Place the resilient LAN access switches and server load balancers at opposite ends of the rack row containing the servers, fed from separate power feeds.
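The "separate power feeds" point deserves a number: with dual A/B feeds, each feed must be sized to carry the entire load alone, since a feed failure shifts everything onto the survivor. A sketch with assumed feed capacity and equipment draws:

```python
# Check that dual A/B power feeds can survive the loss of either one:
# each feed must carry the FULL load alone after a failover. The feed
# capacity and equipment draws are assumed example values.

feed_capacity_kw = 30.0
draws_kw = {"servers": 14.0, "access-switches": 1.5,
            "load-balancers": 1.0, "distribution-switches": 2.0}

total_kw = sum(draws_kw.values())
per_feed_normal_kw = total_kw / 2   # load balanced across A and B in normal running

print(f"normal load per feed: {per_feed_normal_kw:.2f} kW of {feed_capacity_kw} kW")
if total_kw <= feed_capacity_kw:
    print("either feed alone can carry the full load")
else:
    print("WARNING: losing one feed would overload the survivor")
```

In effect, each feed should run at no more than roughly half its capacity in normal operation, which is worth rechecking every time new equipment is added.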

One or more pairs of resilient distribution LAN switches connect all of the access switches to provide front-end client-facing or back-end database connectivity. These should be placed in their own central rack row(s) and may be fiber- or copper-connected, depending on distance requirements and cable run limitations. Again, fully resilient power feeds should be supplied. Check the loading on your power distribution systems to prevent overloading any individual feed, and control the addition of new equipment to the data center through a formal change management process, as outlined in the ITIL Service Support best practice guidance.

— Hugh Lang, Practice Leader, Greenwich Technology Partners
