Rethinking Data Center Design

As connected devices and their data processing demands skyrocket, data center operators are migrating to a new open architecture focused on virtualization.

By the end of the decade, the number of connected devices is expected to reach 50 billion. These billions of devices are generating a massive amount of data: It's estimated that, as early as 2017, 7.7 zettabytes of data will cross the network. This flood of data represents a major processing challenge for the data center ecosystem, and it is pushing operators to abandon client-server and LAN architectures in favor of a design that emphasizes virtualization in servers, storage, and networking.
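To put that traffic figure in perspective, here is a rough back-of-envelope conversion (a sketch only, assuming the 7.7 zettabyte figure cited above is an annual total; everything else is standard unit arithmetic):

    # Rough scale check: what does 7.7 zettabytes per year mean per second?
    ZETTABYTE = 10**21                 # bytes
    SECONDS_PER_YEAR = 365 * 24 * 3600

    annual_traffic_bytes = 7.7 * ZETTABYTE
    avg_bytes_per_second = annual_traffic_bytes / SECONDS_PER_YEAR
    print(f"Average traffic: {avg_bytes_per_second / 10**12:.0f} TB/s")
    # -> roughly 244 TB/s sustained, averaged across the year,
    #    before accounting for peak-hour bursts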

Increasingly, companies are embracing a more flexible, open platform built on the technology pillars of mobile computing, cloud services, big data, and social networking. Trendsetters such as Facebook are building megascale data centers to handle the tremendous bandwidth demands and workloads. Facebook has said it achieved $1.2 billion in savings as a result of its open-platform approach.

Many enterprises are embracing cloud computing, essentially buying compute capacity from a third party and saving themselves the capital and operating expenses of running their own data centers. As a result, cloud service providers are among the heaviest investors in open-platform, megascale data centers. Traditional server vendors, which provide high-level service but do so at a premium, are likely to face serious competition from open-platform vendors, which provide a less expensive, more flexible, and more scalable infrastructure.

Using an open-platform approach means looking at a data center development project as a whole. Servers are a core technology, but it's important to consider the entire system of servers, storage, networking, and software together, and to take a fresh approach to how those components are integrated, in order to bring truly disruptive change to the data center.

Servers
An open-platform approach touches on more than just the server, but the server still plays a critical role in delivering the capacity, processing speed, and energy efficiency demanded of the next-generation data center. As virtualization becomes the norm, a single physical server must be built to house scores of virtual servers in order to drive utilization up. Servers need to be powered by multi-core processors that are both fast and energy efficient, and they must interact seamlessly with increasingly virtualized storage and networking systems.
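As a rough illustration of the consolidation math behind that claim, consider a hypothetical virtualization host (the core count, VM size, and overcommit ratio below are illustrative assumptions, not figures from the article):

    # Hypothetical consolidation estimate: how many VMs fit on one host?
    physical_cores = 32      # assumed dual-socket server, 16 cores per socket
    vcpus_per_vm = 2         # assumed small general-purpose VM
    overcommit_ratio = 4     # assumed vCPU:pCPU overcommit for light workloads

    vms_per_host = (physical_cores * overcommit_ratio) // vcpus_per_vm
    print(f"VMs per host: {vms_per_host}")   # -> 64

Under assumptions like these, scores of lightly loaded physical boxes running at single-digit utilization can collapse onto a handful of hosts running at far higher utilization, which is exactly the efficiency gain virtualization is after.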

Many semiconductor companies and server manufacturers are developing servers running on ARM-based processors instead of the industry standard x86 architecture. ARM processors are common in smartphones and in emerging devices as the Internet of Things trend takes hold, connecting home appliances, automobiles, and various sensors to the network. ARM is helping companies develop processors with innovative multi-core CPUs that deliver true server-class performance and offer best-in-class virtualized accelerators for networking, communications, big data, storage, and security applications.

Networking
The modern data center will also need faster network connectivity, replacing a gigabit Ethernet (GbE) connection with 10 GbE, 40 GbE, and eventually 100 GbE pipes. A 10 GbE fabric network -- on which traffic flows east and west as well as north and south -- promotes energy efficiency, manageability, and a flexible use of computing resources through network virtualization.
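One way to see why fabric design matters for east-west traffic is to compute a top-of-rack switch's oversubscription ratio: the downlink capacity facing the servers versus the uplink capacity facing the spine. A minimal sketch (the port counts and speeds are illustrative assumptions):

    # Oversubscription of a hypothetical leaf (top-of-rack) switch
    # in a 10 GbE fabric.
    server_ports, server_speed_gbps = 48, 10   # assumed 48 x 10 GbE downlinks
    uplink_ports, uplink_speed_gbps = 4, 40    # assumed 4 x 40 GbE uplinks

    downlink_capacity = server_ports * server_speed_gbps   # 480 Gbit/s
    uplink_capacity = uplink_ports * uplink_speed_gbps     # 160 Gbit/s
    print(f"Oversubscription: {downlink_capacity / uplink_capacity:.1f}:1")
    # -> 3.0:1 -- east-west flows crossing the spine contend for bandwidth

Fatter uplinks (40 GbE today, 100 GbE eventually) push that ratio toward 1:1, which is what lets virtual machine and storage traffic move freely between racks.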

Simultaneously, a new Ethernet specification has been developed to improve the speed and lower the cost of Ethernet connectivity between the top-of-rack switch and the server network interface controller within a data center. A recently formed industry group (which includes Broadcom), the 25 Gigabit Ethernet Consortium, created the spec to allow data center networks to run over a 25 Gbit/s or 50 Gbit/s Ethernet link protocol.

The specification prescribes a single-lane 25 Gbit/s Ethernet and dual-lane 50 Gbit/s Ethernet link protocol, enabling up to 2.5X higher performance per physical lane or twinax copper wire between the rack endpoint and switch compared to current 10 Gbit/s and 40 Gbit/s Ethernet links. The Institute of Electrical and Electronics Engineers (IEEE), the governing body for Ethernet standards, is considering the technology for a potential future IEEE standard.
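The 2.5X figure falls directly out of the per-lane signaling rates: 10 GbE and 40 GbE both run 10 Gbit/s per lane (40 GbE simply bonds four lanes), while the new spec runs 25 Gbit/s per lane. A quick sketch of that arithmetic:

    # Per-lane throughput comparison behind the consortium's 2.5X claim.
    links = {
        "10 GbE": {"lanes": 1, "gbps_per_lane": 10},
        "40 GbE": {"lanes": 4, "gbps_per_lane": 10},
        "25 GbE": {"lanes": 1, "gbps_per_lane": 25},
        "50 GbE": {"lanes": 2, "gbps_per_lane": 25},
    }
    for name, link in links.items():
        total = link["lanes"] * link["gbps_per_lane"]
        print(f"{name}: {link['lanes']} lane(s) x "
              f"{link['gbps_per_lane']} Gbit/s = {total} Gbit/s")
    # 25 / 10 = 2.5: each physical lane (or twinax pair) carries 2.5X
    # the traffic of a 10 Gbit/s lane, so fewer lanes deliver more bandwidth.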

Storage
The modern data center built for the cloud also breaks new ground in storage technology with what's known as storage disaggregation. In recent years, storage was aggregated with compute inside the server so that data could be retrieved from storage faster. When the solid-state drive (SSD) caught on as the new storage medium, the storage in those servers got more expensive. But now that faster connections are available between compute and storage, storage can once again be separated from compute, or disaggregated, and shared.

New interconnection technology can move data as fast as 40 Gbit/s -- and soon 100 Gbit/s -- with almost no latency. Disaggregation provides data center operators with more flexibility to upgrade or replace individual components instead of replacing entire systems. Organizations can leverage SSD storage for data that needs to be retrieved quickly, and they can use less expensive SAS and SATA drives for less urgent data.
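In a disaggregated design, a placement policy decides which pool each workload's data lands in. A minimal sketch of such a tiering decision (the tier names and latency thresholds are hypothetical, not drawn from any particular product):

    # Hypothetical tier selection for a disaggregated storage pool.
    def choose_tier(read_latency_target_ms: float) -> str:
        """Pick a storage tier based on how quickly data must be retrieved."""
        if read_latency_target_ms < 1.0:
            return "ssd"        # hot data: sub-millisecond reads
        if read_latency_target_ms < 10.0:
            return "sas-hdd"    # warm data: faster spinning disk
        return "sata-hdd"       # cold data: cheapest capacity

    print(choose_tier(0.5))     # -> ssd
    print(choose_tier(25.0))    # -> sata-hdd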

These many technological changes may present challenges to data center managers more familiar with operating in the client-server physical hardware world, but they represent far more promising opportunities to develop more efficient and open megascale data centers that satisfy the growing demand for faster, higher-capacity computing.

Nicholas (Nick) Ilyadis serves as vice president and chief technical officer of Broadcom's Infrastructure and Networking Group (ING), where he is responsible for product strategy and initiatives for a broad portfolio of Ethernet networking products including automotive ...
Comments
Jerome Amon, User Rank: Apprentice
8/29/2014 | 5:49:10 PM
Re: Rethinking DC Design
Hello,

@aditshar1, OK, I see. I think it would be beneficial for you to develop strong SDN skills, then lead all the SDN-based deployments and share your experience with us :). We don't have a plan for SDN yet, but I'm sure it's coming soon, because our main vendor has a solution (Software Defined Virtual Private Network -- SDVPN) that could increase the capacity of our current transport network...

Thanks! 
aditshar1, User Rank: Apprentice
8/24/2014 | 1:27:37 AM
Re: Rethinking DC Design
We are still at a very early stage; this is a fresh deployment. You could say a new customer is joining the market soon. As of now, I don't think these guys have any plans for SDN. Yes, 10 GbE is the favourite. Do you have any plans for SDN, and if so, where do you plan to start? That is, are you upgrading your existing network with SDN, or building a new network?
Jerome Amon, User Rank: Apprentice
8/23/2014 | 7:31:20 PM
Re: Rethinking DC Design
Hi Brian,

Good, I agree with you. Many possibilities exist to accomplish this security task. It can be handled by the application (using a secured session or a secured protocol), for example, and it also depends on the network architecture (enterprise data center, hybrid cloud...). Yes, it depends!

Thanks! 
Jerome Amon, User Rank: Apprentice
8/23/2014 | 6:31:06 PM
Re: Rethinking DC Design
Hello to all,

@Aditshar1, good.

We deployed 10 GbE, and it works well, with a hierarchical architecture and some redundant links (for now, I should note).

Do you plan to use an SDN approach in this part of the network?

Thanks!
aditshar1, User Rank: Apprentice
8/23/2014 | 2:16:24 PM
Re: Rethinking DC Design
@jerome: I work in transport, the same area you have worked in. Do you see 40 GbE emerging any time soon at your end?
Brian.Dean, User Rank: Ninja
8/22/2014 | 8:44:20 PM
Re: Dell sees ARM servers coming but not soon
Great question, @Marcia. My take is that so far the datacenter and the mobile world have managed to operate on different architectures because the interaction between the two is mostly at the portal and data level, not at the software level.

But as these 50 billion devices come online, and some of them have onboard computational power (devices powered by the Arduino, Raspberry Pi, Intel's Galileo, etc.), it will be interesting to see whether the ecosystem can operate across different architectures in an environment where software requires interoperability at the code level.

Emulation could enable a dual- or multiple-architecture ecosystem, but most of the time emulation is not efficient. For instance, the PS3's Power architecture can't be economically emulated on the x86 architecture.

I feel that if interoperability becomes important, then disruption is on its way. ARM has a good presence at the device end and might expand to the datacenter. The Power architecture recently became an open architecture, which could create flexibility. Intel is trying to make x86 more appealing. Overall, interesting times!
Brian.Dean, User Rank: Ninja
8/22/2014 | 7:57:44 PM
Re: Rethinking DC Design
@Jerome, good point about security. There are many layers of security that need to be addressed. One area that comes to mind concerns data in transit from the end-device to the datacenter. One possible solution to tighten security in this area would be to have the end-device encrypt data before sending it forward. However, this would require the end-device to have a certain level of computational power available locally.
Jerome Amon, User Rank: Apprentice
8/21/2014 | 4:37:15 PM
Re: Rethinking DC Design
Hi Aditshar1,

I've worked on 10 GbE (IP/MPLS backbone), but not 40 GbE yet!

OK, good. Which part of the 4G project do you work on (radio, transport, or core)?

Thanks
Jerome Amon, User Rank: Apprentice
8/21/2014 | 4:33:20 PM
Re: Rethinking DC Design
Hello to all,

OK, many thanks, Marcia, for this clarification!

I'm a newcomer to the community, you know :)

But I think that rethinking DC design nowadays also implies rethinking each layer of the data center, as described in the article. These days we talk about the concept of "software defined" (software-defined servers, software-defined networking, software-defined storage). What could be the benefit of this "software defined" approach in the data center?

And what can we say about security in the new, modern data center?

Thanks!
aditshar1, User Rank: Apprentice
8/21/2014 | 1:55:14 PM
Re: Rethinking DC Design
We still have networks built at 10 GbE; I guess 40 and 100 GbE speeds still need to wait. Is anyone here working on 40 GbE?