
Shreyas Shah
Commentary
Ethernet Innovation Inside The Datacenter

With big data analytics, social media, and other new applications pushing east-west bandwidth demand, datacenters are deploying faster Ethernet and unlocking new capabilities.

Ethernet has become a ubiquitous interconnect technology, providing machine-to-machine and machine-to-human communication. With its simplicity and enhanced capabilities, Ethernet serves markets ranging from homes, transport networks, and medical equipment to wired and wireless networks, telecom networks, and datacenters.

Within datacenters, Ethernet has emerged as the de facto interconnect fabric to connect servers and networking and storage equipment. Datacenters are quickly ratcheting through the technology’s standardized speed steps to accommodate unprecedented bandwidth demands. After years of 1 Gbit/s Ethernet (GbE) connections linking servers, datacenters are deploying 10GbE and moving to 40GbE today. And real-world deployment of 100GbE is clearly on the horizon.

Datacenter operators are finding inventive ways to cost-effectively migrate up the ladder of IEEE 802.3-2012 “Standard for Ethernet” speeds and, along the way, are enabling breakthrough new capabilities in datacenters around the globe.

Application-driven innovation
Traditionally, the datacenter serves clients, and most traffic flows between clients and servers (also called “north-south traffic”). Due to virtualization, social media, and information-sharing sites, server-to-server traffic (“east-west traffic”) has multiplied.

A variety of application trends is converging to drive the surge in east-west traffic -- traffic either within datacenters or from one datacenter to the next. This, of course, has dramatic ramifications for the topology of networking infrastructure and interconnect requirements. In today’s datacenter, high-speed connectivity between servers is a necessity, as evidenced in the global push toward 40GbE deployments today and burgeoning interest in 100GbE, eventually moving toward 400GbE.

There is a multitude of applications fueling demand for east-west bandwidth across datacenters:

  • Virtual machine (VM) migration/server virtualization -- The notion of virtualization has appealed to datacenters for many years. Today, large-scale VM migration between servers has emerged as a huge consumer of bandwidth. Ethernet -- the lowest-cost standard way of communicating between servers in terms of cost per Gbit/s -- plays a huge role in supporting VM migration.
  • Network virtualization -- Network virtualization and virtual extensible local area networks (VXLANs) increase the agility of the network infrastructure and the capabilities of Ethernet fabrics in datacenters. Cloud-scale datacenters are implementing virtualized servers with Ethernet fabrics supporting VXLAN, extending the datacenter from the roughly 4,000 virtual LANs possible a couple of years ago to as many as 16 million virtual networks. Network virtualization and Ethernet fabrics place increasing demands on bandwidth among virtual servers, along with the need for a more secure environment -- hence the need for higher-speed Ethernet.
  • Social media -- Some social media sites, such as Facebook, are driving tremendous interconnect bandwidth demands inside datacenters. One social media page can be composed of content driven by a number of databases that all must be accessed and enabled to talk to each other to complete the request and present the page correctly. Furthermore, each user is a creator, as well as a consumer, of content, so uploading of personal pictures and videos also adds to the burden. Consequently, the data transfer among social media servers at peak moments of usage can drive substantial bandwidth requirements for interconnections within the datacenter.
  • Video on demand -- Netflix and other video-on-demand sites are driving a tsunami of video data traffic over telecom networks from their datacenters. This has increased the amount of traffic communicated among multiple datacenter sites, as well as among the machines inside a datacenter. It has also increased the demand for Ethernet speeds from 40GbE to 400GbE to deliver real-time video to consumers on televisions, tablets, and smartphones.
  • Big data analytics -- Big data analytics and targeted, “just-in-time” advertising are big consumers of bandwidth inside datacenters, moving data among servers and between servers and storage.
  • Internet of Things (IoT) -- IDC reported last October that it expects the installed base of the Internet of Things to reach approximately 212 billion "things" globally by the end of 2020, including 30.1 billion installed "connected (autonomous) things." Cars, refrigerators, and countless other devices will come online via Ethernet, and datacenters around the globe will serve and control them.
  • Backup inside datacenters -- Over the last decade, datacenters have adopted increasingly powerful backup applications as the varied and potentially devastating costs of network downtime have become better understood. The trend has steadily grown the amount of traffic exchanged between servers and storage. The need for Ethernet fabrics is more important than ever to support the very high volume of storage traffic over Ethernet inside datacenters, as well as to back up datacenters.
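The ID-space jump in the network-virtualization bullet above comes straight from header arithmetic: a classic 802.1Q VLAN tag carries a 12-bit ID, while a VXLAN header carries a 24-bit network identifier (VNI). A minimal sketch of the numbers:

```python
# Illustrative arithmetic behind the VLAN-to-VXLAN capacity jump.
# A classic 802.1Q VLAN tag carries a 12-bit ID; a VXLAN header
# carries a 24-bit VNI (VXLAN Network Identifier).

VLAN_ID_BITS = 12
VXLAN_VNI_BITS = 24

vlan_count = 2 ** VLAN_ID_BITS      # 4,096 possible VLAN IDs
vxlan_count = 2 ** VXLAN_VNI_BITS   # 16,777,216 possible VNIs

print(f"802.1Q VLANs:  {vlan_count:,}")        # 4,096
print(f"VXLAN VNIs:    {vxlan_count:,}")       # 16,777,216
print(f"Growth factor: {vxlan_count // vlan_count}x")  # 4096x
```

That 4,096x jump in segment capacity is what lets a single cloud-scale fabric carve out an isolated virtual network per tenant.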

New capabilities
These market-driven application trends are giving rise to a host of new Ethernet and related capabilities, unlocked by the increased bandwidth available for datacenter interconnection.

With today’s migration to 40GbE server interconnects, the industry serving datacenters has developed 4x10G Ethernet breakout cables that package within a single, short-reach module four copper or optical fiber cables, each delivering 10G, for a total of 40G. The same breakout strategy is likely to be used to eventually support 100GbE and 400GbE server interconnects, offering datacenters an appealing and cost-effective migration strategy into the higher speeds.
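The breakout strategy above is simple lane arithmetic: a multi-lane cable aggregates several lower-rate lanes into one logical link. The sketch below uses lane configurations common in IEEE 802.3 (4x10G for 40GbE; 4x25G and 10x10G for 100GbE); exact product offerings vary by vendor.

```python
# Sketch of the lane/breakout arithmetic behind multi-lane cables.
# Lane counts and rates reflect common IEEE 802.3 configurations;
# this is illustrative, not a catalog of any vendor's products.

def aggregate_rate(lanes: int, lane_gbps: int) -> int:
    """Total link rate delivered by a multi-lane breakout cable."""
    return lanes * lane_gbps

configs = {
    "40GbE  (4x10G)":  (4, 10),
    "100GbE (4x25G)":  (4, 25),
    "100GbE (10x10G)": (10, 10),
}
for name, (lanes, rate) in configs.items():
    print(f"{name}: {aggregate_rate(lanes, rate)} Gbit/s total")
```

Because each lane reuses proven lower-speed electronics, the breakout approach lets operators buy into the next speed grade without waiting for single-lane serdes at the full rate.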

Another emerging capability that is gaining traction among datacenters because of the deployment of higher-speed interconnects is stateless computing. It's a concept that has been talked about for a decade or more, but never saw widespread deployment among datacenters, largely because interconnect bandwidths were lacking.

In stateless computing, the server is nothing but a central processing unit (CPU) and memory; the unique software configurations and other states of the server are stored elsewhere. The server -- a cartridge-form blade inside a core switch -- performs only computations, leveraging the software configurations and other states from other devices. Stateless computing is a scenario that enhances security and reduces maintenance burden, all predicated on very-high-speed interconnections among servers and the other devices where the other states reside.
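The stateless model described above can be sketched in a few lines: at boot, a bare compute node pulls its entire configuration ("state") from a remote store over the fabric. Every name here (REMOTE_STATE, boot_stateless_node, the node IDs) is hypothetical, standing in for a real state server reached over high-speed Ethernet.

```python
# Minimal sketch of stateless computing: the node holds no local
# configuration; it fetches all of its state at every boot. The
# in-memory dict stands in for a remote state store on the fabric.

REMOTE_STATE = {
    "node-17": {"hostname": "web-17", "role": "frontend",
                "nic_speed_gbps": 40},
}

def boot_stateless_node(node_id: str) -> dict:
    """Fetch the node's state from the remote store and 'start' it."""
    state = REMOTE_STATE[node_id]   # in reality, a network fetch
    # Apply the fetched configuration, then begin serving.
    return {"id": node_id, **state, "status": "running"}

node = boot_stateless_node("node-17")
print(node["hostname"], node["status"])  # web-17 running
```

The sketch makes the dependency obvious: every boot and every state lookup crosses the interconnect, which is why the model only became practical once server links reached 10GbE and beyond.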

The stateless computing model, furthermore, translates into a valuable series of related capabilities. Fabric-attached storage (FAS), for example, can deliver significant management and cost efficiencies, as CPUs are on one side of the fabric and storage is concentrated into a high-availability cluster on the other. Object-oriented storage, often implemented with FAS, also is catching on. Data is managed as objects (instead of files or blocks), with meta-data descriptions of the contents for improved indexing and management, and reduced administration complexity.
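The object-oriented storage idea above can be illustrated with a toy store: data is addressed as objects with attached metadata rather than as files or blocks, and the metadata drives indexing and search. The class and method names below are illustrative, not any specific product's API.

```python
# Toy sketch of object-oriented storage: objects are addressed by a
# content-derived ID and carry metadata used for indexing/search.
import hashlib

class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, data: bytes, **metadata) -> str:
        """Store an object; return its content-derived object ID."""
        oid = hashlib.sha256(data).hexdigest()
        self._objects[oid] = {"data": data, "meta": metadata}
        return oid

    def get(self, oid: str) -> bytes:
        return self._objects[oid]["data"]

    def search(self, **criteria):
        """Return IDs of objects whose metadata matches all criteria --
        the improved indexing the article mentions."""
        return [oid for oid, rec in self._objects.items()
                if all(rec["meta"].get(k) == v for k, v in criteria.items())]

store = ObjectStore()
oid = store.put(b"photo bytes", owner="alice", type="image")
print(store.search(owner="alice"))  # list containing alice's object ID
```

In a fabric-attached deployment, `put` and `get` would travel over the Ethernet fabric to the storage cluster, which is why the model leans on high-speed east-west links.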

Innovation has been nonstop in the IEEE 802.3 Ethernet Working Group, as well. Key developments include: 10GBASE-T, standardized in IEEE 802.3an-2006 to support 10GbE connectivity over unshielded or shielded twisted-pair cables at distances up to 100 meters; 40G and 100G Ethernet (including backplane operation), standardized in IEEE 802.3ba-2010 to simplify complex link-aggregation schemes and pave the way for a new generation of high-rate server connectivity and core switching; and Energy Efficient Ethernet (EEE, IEEE 802.3az-2010), an option for reducing power consumption in datacenters based upon system utilization.

Market-driven Ethernet innovation never stops -- typically inside-out, from core to edge, across the network. Hallmarks of the technology's 40-plus-year history have always been cost-effective migration paths, simplicity, advanced capabilities, and backward compatibility with legacy implementations. Such characteristics are certainly in evidence in Ethernet's deployment for server interconnection and Ethernet fabrics inside today's datacenters.

Shreyas Shah has more than 20 years of experience in designing ASICs, FPGAs, and systems. He has held various engineering positions in design and architecture in wired communications and has focused on datacenters for the last 10+ years. Currently he is a datacenter system architect with Xilinx. Previously he was founder and CTO of Xsigo Systems. Xilinx is a member of the Ethernet Alliance.

The Ethernet Alliance is a global, non-profit industry consortium of member organizations dedicated to the continued success and advancement of Ethernet technologies. The Ethernet Alliance serves as the premier global repository for all things ...
Comments
AbeG, User Rank: Black Belt
5/30/2014 | 11:31:22 PM
Re: What's next?
@MarciaNWC - Thanks for sharing that link. 

I've read discussions about the relative merits of building datacenters in certain areas, whether to take advantage of grants, tax incentives, local talent pools, or weather conditions. I didn't realize that some organizations have already taken steps to take advantage of weather conditions in certain areas.
jgherbert, User Rank: Ninja
5/29/2014 | 10:09:20 PM
Re: What's next?
Fascinating. While the sun is a great asset from an energy perspective, you'd also think the desert heat would be undesirable for a data center. That said, if I recall correctly, Google has been leading the charge to 'hot' data centers, with its Belgian DC having no cooling capacity at all. When it gets too hot they move traffic elsewhere and stop the workers from going in there.

Walking past the open doors of what I understand may be a smaller Google DC in the Atlanta area, and feeling the blast of heat emanating from within, I believe this may be in practice elsewhere too ;-)
jgherbert, User Rank: Ninja
5/29/2014 | 10:06:53 PM
Wait, what?
"Ethernet has become a ubiquitous interconnect technology to provide communication from machines to machines and machines to humans"

Do you have a "port" that I don't? I'm confused where humans fit into this equation. Maybe I need to look at Human 2.0 or something ;-)(
MarciaNWC, User Rank: Strategist
5/13/2014 | 6:20:37 PM
Re: What's next?
Interesting Apple datacenter story Brian, thanks for sharing that link. The various datacenter efficiency strategies that companies are developing in these extreme climates are intriguing.
Brian.Dean, User Rank: Ninja
5/13/2014 | 5:22:01 PM
Re: What's next?
It is extremely interesting to ponder the connectivity requirements of the future. I think once 40, 100, and 400GbE have been exhausted, firms will look toward FCoE to deliver greater speeds, or maybe a wireless technology will emerge that is faster and cheaper.
Brian.Dean, User Rank: Ninja
5/13/2014 | 5:02:43 PM
Re: What's next?
Marcia, thanks for the link. It's a good read, because I think we will be hearing about many more data centers moving to extreme locations in the years to come. Apple has set up a datacenter in Nevada; currently it gets 94% of its energy from renewable sources. Neighboring Utah also has datacenters taking advantage of the desert sun.

If I recall correctly, one estimate puts the entire energy usage of the cloud in 6th place relative to individual economies; Germany uses less power than the cloud and would rank 7th on the same scale. And IDC estimates that the cloud will grow from 4.4 zettabytes of data at present to around 44 zettabytes by 2020. Granted, the amount of data and its growth will not be perfectly correlated with energy consumption needs, but I imagine it will have an effect -- causing firms to distribute their datacenters with better communication technology.
MarciaNWC, User Rank: Strategist
5/13/2014 | 11:23:52 AM
Re: What's next?
Interesting point, Brian. We'd done some coverage of new datacenters in cold climates, but I hadn't heard about the datacenters in deserts. Do you have any other information on that?
aditshar1, User Rank: Apprentice
5/13/2014 | 3:21:48 AM
Re: What's next?
What's next sounds interesting; I am curious about this too. A couple of days back I read on some site that 40GbE over 4-pair cabling, with 2 connectors, over 30m distances is scheduled to be the next development.
Brian.Dean, User Rank: Ninja
5/12/2014 | 9:16:14 PM
Re: What's next?
Another driver behind the demand for greater Ethernet speeds has to do with energy: the amount a datacenter consumes and the ways a datacenter can be made more efficient.

Facebook has a datacenter close to the Arctic Circle to take advantage of the naturally occurring cold climate to cool the facility rather than burn expensive fuels; other firms place their datacenters in deserts to extract the highest possible returns from solar energy; and some datacenters need to be strategically placed for the lowest possible latency, for instance, for algo-trading. All this is great, but it requires Ethernet to evolve at a faster rate.
MarciaNWC, User Rank: Strategist
5/12/2014 | 6:56:37 PM
What's next?
Hi Shreyas -- Interesting developments here. Can you provide some insight into what we might expect to see next from the IEEE 802.3 Ethernet Working Group, or new areas of focus for the group?