• 08/20/2014
    7:00 AM

Rethinking Data Center Design

With the skyrocketing number of connected devices and data processing requirements, data center operators are migrating to a new open architecture that's focused on virtualization.

By the end of the decade, the number of connected devices is expected to reach 50 billion. These billions of devices are generating a massive amount of data: It's estimated that, as early as 2017, 7.7 zettabytes of data will cross the network. This influx of data processing requirements represents a massive challenge for the data center ecosystem as operators abandon client-server and LAN architectures in favor of a design that emphasizes the use of virtualization in servers, storage, and networking.

Increasingly, companies are embracing a more flexible, open platform built on the technology pillars of mobile computing, cloud services, big data, and social networking. Trendsetters such as Facebook are building megascale data centers to handle the tremendous bandwidth demands and workloads. Facebook has said it achieved $1.2 billion in savings as a result of its open-platform approach.

Many businesses and enterprises are embracing cloud computing, essentially buying compute capacity from a third party, saving them the capital and operating expenses of running their own data centers. As a result, cloud service providers are among the heaviest investors in open-platform, megascale data centers. Traditional server vendors, which provide high-level service but do so at a premium, are likely to face serious competition from open-platform vendors, which provide a less expensive, more flexible, and scalable infrastructure.

Using an open-platform approach means looking at a data center development project as a whole. Though servers are a core technology, it's important to look at the entire system of servers, storage, networking, and software together and take a fresh approach to how those components need to be better integrated to bring truly disruptive change to the data center.

An open-platform approach touches on more than just the server, but the server still plays a critical role in delivering the capacity, processing speed, and energy efficiency demanded of the next-generation data center. Servers must be built to house scores of virtual servers in one physical server in order to increase server utilization as virtualization becomes the norm. Servers need to be powered by multi-core processors that are both fast and energy efficient, and they must seamlessly interact with increasingly virtualized storage and networking systems.

Many semiconductor companies and server manufacturers are developing servers running on ARM-based processors instead of the industry standard x86 architecture. ARM processors are common in smartphones and in emerging devices as the Internet of Things trend takes hold, connecting home appliances, automobiles, and various sensors to the network. ARM is helping companies develop processors with innovative multi-core CPUs that deliver true server-class performance and offer best-in-class virtualized accelerators for networking, communications, big data, storage, and security applications.

The modern data center will also need faster network connectivity, replacing a gigabit Ethernet (GbE) connection with 10 GbE, 40 GbE, and eventually 100 GbE pipes. A 10 GbE fabric network -- on which traffic flows east and west as well as north and south -- promotes energy efficiency, manageability, and a flexible use of computing resources through network virtualization.

Simultaneously, a new Ethernet specification has been developed to improve the speed and lower the cost of Ethernet connectivity between the top-of-rack switch and the server network interface controller within a data center. A recently formed industry group (which includes Broadcom), the 25 Gigabit Ethernet Consortium, created the spec to allow data center networks to run over a 25 Gbit/s or 50 Gbit/s Ethernet link protocol.

The specification prescribes a single-lane 25 Gbit/s Ethernet and dual-lane 50 Gbit/s Ethernet link protocol, enabling up to 2.5X higher performance per physical lane or twinax copper wire between the rack endpoint and switch compared to current 10 Gbit/s and 40 Gbit/s Ethernet links. The Institute of Electrical and Electronics Engineers (IEEE), the governing body for Ethernet standards, is considering the technology for a potential future IEEE standard.
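The consortium's "up to 2.5X" figure falls out of simple per-lane arithmetic, which can be sketched as a back-of-the-envelope calculation (the numbers below are the ones cited above):

```python
# Per-lane bandwidth comparison (figures as cited above).
# 10 GbE uses one 10 Gbit/s lane; 40 GbE uses four 10 Gbit/s lanes.
# The new spec runs each physical lane at 25 Gbit/s instead.
legacy_lane_gbps = 10.0   # 10 GbE / 40 GbE per-lane rate
new_lane_gbps = 25.0      # 25 GbE / 50 GbE per-lane rate

speedup_per_lane = new_lane_gbps / legacy_lane_gbps
print(f"Per-lane speedup: {speedup_per_lane}x")  # 2.5x

# Dual-lane 50 GbE vs. quad-lane 40 GbE: more bandwidth over half the lanes.
print(f"50 GbE: {2 * new_lane_gbps:.0f} Gbit/s over 2 lanes")
print(f"40 GbE: {4 * legacy_lane_gbps:.0f} Gbit/s over 4 lanes")
```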

The modern data center built for the cloud also breaks new ground in storage technology with what's known as storage disaggregation. In recent years, storage was aggregated with compute within a server so data could be retrieved from storage faster. When solid-state drives (SSDs) caught on as the new storage medium, the storage in those servers got more expensive. But now that faster connections are available between compute and storage, storage can once again be separated from compute, or disaggregated and shared.

New interconnection technology can move data as fast as 40 Gbit/s -- and soon 100 Gbit/s -- with almost no latency. Disaggregation provides data center operators with more flexibility to upgrade or replace individual components instead of replacing entire systems. Organizations can leverage SSD storage for data that needs to be retrieved quickly, and they can use less expensive SAS and SATA drives for less urgent data.
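The tiering idea above can be illustrated with a toy policy. This is a hypothetical sketch, not any vendor's API: the thresholds and tier names are made up to show how hot, latency-sensitive data would land on SSD while colder data goes to cheaper SAS or SATA drives.

```python
# Toy storage-tiering policy (illustrative thresholds, not a real product API):
# frequently retrieved or latency-sensitive data -> SSD; colder data -> HDD tiers.
def choose_tier(reads_per_day: int, latency_sensitive: bool) -> str:
    """Pick a storage tier for a dataset based on its access pattern."""
    if latency_sensitive or reads_per_day > 10_000:
        return "ssd"   # fast but expensive: data that must be retrieved quickly
    if reads_per_day > 100:
        return "sas"   # mid-tier spinning disk
    return "sata"      # cheap capacity for less urgent data

print(choose_tier(reads_per_day=50_000, latency_sensitive=False))  # ssd
print(choose_tier(reads_per_day=10, latency_sensitive=False))      # sata
```

Disaggregation makes a policy like this practical: the tiers live in shared pools behind the fast interconnect, so moving a dataset between tiers doesn't mean touching the server it's attached to.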

These many technological changes may present challenges to data center managers more familiar with operating in the client-server physical hardware world, but they represent far more promising opportunities to develop more efficient and open megascale data centers that satisfy the growing demand for faster, higher-capacity computing.


Rethinking DC Design

Hello Nick,

Thanks for this very important topic and your write-up!

Yes, designing a modern DC now needs more skills and knowledge!

Basically, the three layers in a DC are servers, networking, and storage.

But the new design must also take SDN into account, so could you tell us your point of view?

And, with all these devices connected and all this information stored (cloud services), security becomes more and more important; I think security must be a specific point on which we focus! Could you tell us something about it?

We can already see this as Cisco tries to reorganize its workforce and plans to focus on security, software, and UC.


Re: Rethinking DC Design

Hi Jerome -- Nick's original article, which was edited for length purposes, included some thoughts on security: "Security technology is another essential component of open platform data center designs, not the afterthought it had been in the past. Networks require deep packet inspection to screen for anomalies, but must be able to do so at gigabit speeds. Other network components that have to have security capabilities include edge routers, firewalls, residential gateways and virtual private network (VPN) appliances."

Security has to be provided throughout the network without compromising speed and performance, he added.
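The screening the quote describes can be sketched in miniature. Real deep packet inspection runs in hardware at line rate; the toy scan below (with made-up byte signatures) only shows the basic idea of matching payloads against known-bad patterns:

```python
# Toy payload screening: flag packets containing known-bad byte signatures.
# The signatures here are illustrative placeholders, not a real threat feed.
BAD_SIGNATURES = [b"\x90\x90\x90\x90", b"cmd.exe"]

def inspect(payload: bytes) -> bool:
    """Return True if the payload matches any known-bad signature."""
    return any(sig in payload for sig in BAD_SIGNATURES)

print(inspect(b"GET /index.html HTTP/1.1"))  # False
print(inspect(b"... cmd.exe /c ..."))        # True
```

Doing this at gigabit speeds on every flow is exactly why, per the quote, the capability has to be engineered into routers, firewalls, and gateways rather than bolted on.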

Re: Rethinking DC Design
We still have networks built at 10 GbE; I guess 40 and 100 GbE speeds still need to wait. Anyone here working on 40 GbE?
Re: Rethinking DC Design

Hi Aditshar1,

I've worked on 10 GbE (an IP/MPLS backbone), but not 40 GbE yet!

OK, good. Which part of the 4G project do you work on (radio | transport | core)?



Re: Rethinking DC Design
@jerome: I work on transport, the same part you worked on. Do you think 40 GbE is emerging any time soon at your end?
Re: Rethinking DC Design

Hello to all,

@Aditshar1, good.

We deployed 10 GbE, and it works well, with a hierarchical architecture and some redundant links! (For now, I should add.)

Do you plan to use an SDN approach in this part of the network?



Re: Rethinking DC Design
We are still at a very early stage; this is a fresh deployment. You could say a new customer joining the market soon. As of now, I don't think these guys have any plans for SDN. Yes, 10 GbE is the favourite. Do you have any plans for SDN, and if so, where do you plan to start? That is, are you upgrading your existing network with SDN, or building a new network?
Re: Rethinking DC Design


@Aditshar1, OK, I see. So I think it would be beneficial for you to develop strong SDN skills, then lead all the SDN-based deployments and share your experience with us :). Actually, we don't have a plan for SDN, but I'm sure it's coming soon, because our main vendor has a solution (Software Defined Virtual Private Network -- SDVPN) that could increase the power of our current transport network...


Re: Rethinking DC Design

Hello to all,

OK, many thanks, Marcia, for this clarification!

I'm a newcomer to the community, you know :)

But I think that talking about rethinking DC design nowadays also implies rethinking each layer of the data center, as described in the article. Actually, we talk about the concept of "Software Defined" (Soft Def Servers, Soft Def Network, Soft Def Storage). What could be the benefit of this approach in the data center?

What can we also say about security in the new modern data center?




Re: Rethinking DC Design

@Jerome, good point about security. There are many layers of security that need to be addressed. One area that comes to mind concerns data in transit from the end-device to the data center. One possible solution to tighten security in this area would be to have the end-device encrypt data before sending it forward. However, this would require that the end-device have a certain level of computational power available to it locally.
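One common way to get the encrypt-before-sending behavior described above is to wrap the device's connection in TLS. A minimal sketch, assuming a Python-capable end-device and an illustrative placeholder host name and port:

```python
# Minimal sketch: an end-device encrypting its telemetry in transit by
# wrapping a plain TCP socket in TLS. Host and port are placeholders.
import socket
import ssl

def send_reading(payload: bytes, host: str = "ingest.example.com",
                 port: int = 8883) -> None:
    """Send a sensor reading to the data center over an encrypted channel."""
    context = ssl.create_default_context()  # verifies the server certificate
    with socket.create_connection((host, port)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            tls_sock.sendall(payload)       # only ciphertext crosses the wire
```

The TLS handshake involves public-key operations, which is exactly the local computational cost the comment above raises for constrained devices.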

Re: Rethinking DC Design

Hi Brian,

Good, I agree with you. Many possibilities exist to achieve this security task -- it can be handled by the application (using a secured session or a secured protocol), for example, and it also depends on the network architecture (enterprise data center, hybrid cloud...). Yes, it depends!



The ARM architecture is moving fast into the server scene. Companies such as nVidia and Qualcomm are very active in this market.

The architecture is designed for low power use and fast processing, and is excellent for virtualization and massive data analysis.

Dell sees ARM servers coming but not soon

According to Dell, there is still a lot of work to be done before ARM servers can easily substitute for x86-based units, mostly because of the lack of software. But the trend is there, and we'll see more ARM-based servers in the future.

"There will definitely be [ARM] server products shipping this year and a reasonable number next year, but it won't really begin to ramp until 2016," Forrest Norrod, general manager of Dell Inc.'s server group, told EE Times.

Re: Dell sees ARM servers coming but not soon

Thanks for providing those details, Pablo. How much potential do you think ARM has to displace x86 in the server market?

Re: Dell sees ARM servers coming but not soon

Great question, @Marcia. My take is that, so far, the data center and the mobile world have managed to operate on different architectures because the interaction between the two is mostly at the portal and data level -- not at the software level.

But as these 50 billion devices come online, and some of them have onboard computational power (for example, devices powered by the Arduino, Raspberry Pi, and Intel's Galileo), it will be interesting to see whether the ecosystem can operate with different architectures in an environment where software requires interoperability at the code level.

Emulation could enable a dual- or multiple-architecture ecosystem, but most of the time emulation is not efficient. For instance, the PS3's Power architecture can't be economically emulated on the x86 architecture.

I feel that if interoperability becomes important then disruption would be on its way. ARM has a good presence at the device end and might expand to the datacenter. The Power architecture recently became an Open architecture, which could create flexibility. Intel is trying to make the x86 more appealing. Overall, interesting times!