Better Data Center Standardization Through Pod Architecture Design

The traditional piecemeal method of data center hardware acquisition is inefficient, error-prone and costly to maintain. Moving the data center forward toward private cloud architectures requires new thinking, planning and standardization. One proven method for this is pod architecture design.

In this sense, a pod is a defined set of compute, network and storage resources. Pods can be designed with expandability options for compute and storage, or can be fixed at a set compute capacity. Overall, pod designs provide tighter integration and better standardization across the infrastructure.
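
To make the definition concrete, here's a minimal sketch of a pod captured as a declarative spec. Python is used purely for illustration, and every field name and capacity below is a hypothetical placeholder, not a recommendation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PodSpec:
    """A pod as a defined, repeatable unit of compute, network and storage."""
    name: str
    compute_nodes: int           # servers shipped with the pod
    network_switches: int        # switches dedicated to the pod
    storage_capacity_tb: float   # usable storage delivered with the pod
    expandable: bool = True      # False models a fixed-capacity pod
    max_compute_nodes: Optional[int] = None  # expansion ceiling, if expandable

# A standardized template that is instantiated the same way on every purchase.
standard_pod = PodSpec(
    name="pod-template-v1",
    compute_nodes=16,
    network_switches=2,
    storage_capacity_tb=200.0,
    expandable=True,
    max_compute_nodes=32,
)
```

Treating the pod as data like this is what makes it repeatable: every expansion instantiates the same template rather than a one-off build.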

One of the key advantages of pod architecture is the tight integration among components. Historically, IT infrastructure has been purchased in disparate refresh cycles and chosen by separate teams. This leads to compatibility complications and, potentially, sub-optimal designs used as workarounds. Designing these solutions with a holistic approach allows feature sets to be tightly coupled, enabling maximum value from the infrastructure as a whole.

Another major benefit is the operational cost savings gained from standardization. Using a pod infrastructure as your building block for the data center allows your IT staff to focus on one set of hardware and software components. That focus deepens their expertise and shortens the learning curve.

The core components of a pod architecture are compute, network and storage. Security appliances (virtual or physical) may also be included. Another component typically factored in is automation/orchestration software; while this won't be repeated with each pod, the underlying pod hardware may affect automation/orchestration decisions.
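
As a rough illustration of that last point, automation tooling often has to branch on the pod's hardware profile. The sketch below is hypothetical; the profile names and tool choices are invented for the example:

```python
# Hypothetical mapping from pod hardware profile to automation choices.
AUTOMATION_PROFILES = {
    "blade-chassis-v1": {
        "provisioning": "chassis-manager-workflow",
        "network_config": "fabric-templates",
    },
    "rackmount-v1": {
        "provisioning": "pxe-bare-metal-workflow",
        "network_config": "top-of-rack-templates",
    },
}

def automation_for(hardware_profile: str) -> dict:
    """Select orchestration settings based on the pod's underlying hardware."""
    if hardware_profile not in AUTOMATION_PROFILES:
        raise ValueError(f"No automation profile defined for {hardware_profile!r}")
    return AUTOMATION_PROFILES[hardware_profile]

print(automation_for("blade-chassis-v1")["provisioning"])  # chassis-manager-workflow
```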

The pod should be designed in repeatable chunks for growth purposes. This means you'll need to properly assess your resource consumption model. Let's take a look at two examples:

A smaller organization may be able to fit into a single pod architecture for the foreseeable future. In these cases you'll want to design compute and storage expansion options that will maintain standardization. Blades will typically be a good fit here because the networking is expanded by default with each chassis. For rack-mount servers, your compute expansion option may include top-of-rack switching or additional blades for a director-class switch.

In a larger organization, the pod itself should typically be the expansion unit, although half-pod options aren't unheard of. You may also design expansion options for compute and storage in case one is maxed out while the other still has headroom. Overall, the idea is to get as close as possible to purchasing a new, uniform pod each time you hit set utilization levels.
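
One way to picture that consumption model is as a simple trigger: once aggregate utilization crosses a set level, you buy another uniform pod. The threshold and capacity figures below are hypothetical placeholders:

```python
def pods_to_order(used_capacity: float, pod_capacity: float,
                  current_pods: int, threshold: float = 0.75) -> int:
    """Return how many uniform pods to buy to bring utilization back under the threshold."""
    pods = current_pods
    while used_capacity / (pods * pod_capacity) >= threshold:
        pods += 1
    return pods - current_pods

# Example: 4 pods of 100 capacity units each with 330 units in use (82.5% utilized);
# one additional pod brings utilization back under 75%.
print(pods_to_order(used_capacity=330, pod_capacity=100, current_pods=4))  # -> 1
```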

This consumption model helps not only with deployability and operability, but also with keeping the books straight from a finance perspective. The cost of adding IT resources becomes very predictable, and with steady growth it is simple to plan for. Even with unpredictable growth, it's beneficial to know the complete cost well in advance.
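
To show how predictable the spend becomes under steady growth, here's a toy forecast. The growth rate, pod capacity and pod price are all hypothetical:

```python
import math

def expansion_plan(current_demand: float, annual_growth: float, years: int,
                   pod_capacity: float, cost_per_pod: float):
    """Yield (year, pods_to_buy, spend) tuples under steady demand growth."""
    owned = math.ceil(current_demand / pod_capacity)
    demand = current_demand
    for year in range(1, years + 1):
        demand *= 1 + annual_growth
        needed = math.ceil(demand / pod_capacity)
        buy = max(0, needed - owned)
        owned = max(owned, needed)
        yield year, buy, buy * cost_per_pod

# Example: 20% annual growth, 100-unit pods at a hypothetical $500K each.
for year, buy, spend in expansion_plan(350, 0.20, 3, 100, 500_000):
    print(f"Year {year}: buy {buy} pod(s), spend ${spend:,.0f}")
```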

In both scenarios, you'll want to factor in things like racks and cable management. These should be standardized as well, and often make a great pod boundary (as in one pod per rack). Another factor you'll want to consider when designing your pod is power and cooling--especially with enclosed or in-row systems. You'll want to tailor your pod design to a logical expansion point for your HVAC design.
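
A quick sanity check along those lines: confirm that one pod's power draw and heat load fit the per-rack (or per-row) budget your facility design allows. The numbers below are placeholders:

```python
def fits_rack(pod_power_kw: float, rack_power_budget_kw: float,
              pod_heat_load_kw: float, rack_cooling_budget_kw: float) -> bool:
    """Check a one-pod-per-rack boundary against facility power and cooling budgets."""
    return (pod_power_kw <= rack_power_budget_kw
            and pod_heat_load_kw <= rack_cooling_budget_kw)

# Example: a 14 kW pod against hypothetical 17 kW power and cooling budgets per rack.
print(fits_rack(pod_power_kw=14, rack_power_budget_kw=17,
                pod_heat_load_kw=14, rack_cooling_budget_kw=17))  # -> True
```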

Building a data center on a pod design avoids the all-too-common Frankenstein approach of piecing together disparate systems, and the tighter integration and testing yield a more robust, stable platform. With each expansion, the IT team will know the hang-ups in advance. Additionally, pods offer financial and operational benefits.

Joe Onisick is the Founder of Define the Cloud. You can follow his angry rants at http://www.definethecloud.net or on Twitter @jonisick.

