Our current compute model has opened up numerous new possibilities in data delivery, content management, and overall user experience. The reality is simple: If you’re carrying a smart device, you’re probably connected to a data center somewhere. That certainly wasn’t the case a few years ago. Users demand a continuous flow of information to any device, anytime and anywhere. Mobility has become the new normal.
The latest Cisco Global Cloud Index shows the impact of this type of data growth:
• Annual global data center IP traffic will reach 7.7 zettabytes by the end of 2017. By 2017, global data center IP traffic will reach 644 exabytes per month (up from 214 exabytes per month in 2012).
• Global data center IP traffic will nearly triple over the next five years. Overall, data center IP traffic will grow at a compound annual growth rate (CAGR) of 25% from 2012 to 2017.
Remember, this traffic isn’t only WAN-based. There is direct growth in intra-data center traffic as well. In fact, the same Cisco report indicates that the portion of traffic residing within the data center will remain the majority throughout the forecast period, accounting for 76% of data center traffic in both 2012 and 2017. Factors keeping traffic inside the data center include the functional separation of application servers, storage, and databases, which generates replication, backup, and read/write traffic that traverses the data center. Furthermore, parallel processing divides tasks and sends them to multiple servers, contributing to internal data center traffic.
So how can the next-generation data center model better control this influx of traffic? Moreover, how do you continue to avoid infrastructure and vendor lock-in? And finally, can you continue to leverage hardware and software optimizations?
The answer revolves around infrastructure optimizations and the newly abstracted layer within the data center. The next-gen data center model will be built on more open technologies. Typically proprietary stacks are now creating API tie-ins that let customers integrate with outside technologies.
As a result, the data center itself is becoming much more abstracted. Enterprises want a more open cloud, virtualization, and workload delivery infrastructure. They need their data centers to communicate regardless of the underlying hardware platform. That means an agnostic data center won’t really care what type of hardware you deploy -- only that you intelligently present resources to the software-defined stack or management layer.
[Read how more companies are expected to consider software-defined data center elements this year in "3 Data Center Trends To Watch In 2014."]
So what does the agnostic data center really look like? Here are some characteristics:
• Software-defined technologies. The virtual layer has come a long way. In fact, logical abstraction of key data center services is happening everywhere. Storage, networking, security, compute, and even the data center layer itself are all now part of the SDX stack. The logical layer can look agnostically at physical resources and utilize them regardless of the backend manufacturer.
• Next-generation data center management (DCIM, DCOS). This level of data center interconnectivity requires new levels of data center management. Just look at how far DCIM has come. Now, there are even data center operating systems aiming to connect everything from chips to cooling.
• Automation (logical and physical). Think you won’t see a robot in your data center? Think again. There’s a reason Google recently acquired seven different robotics tech firms. Plus, data center workflow orchestration and infrastructure automation are creating a much leaner and more intelligent data center platform.
• Creating the agnostic cloud. Public, hybrid, community, or distributed cloud -- does it matter? Data center platforms are being designed to control and manage cloud platforms at a truly agnostic level. Cloud management and data center controls are becoming intertwined, so that your cloud simply becomes a powerful extension of your environment. From there, all resources are pooled into a powerful management layer.
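To make the pooling idea above concrete, here is a minimal, hypothetical sketch in Python of a logical layer that aggregates resources from heterogeneous hardware and presents them to a management layer. The class names, vendor names, and resource fields are invented for illustration -- no real SDX product works exactly this way:

```python
from dataclasses import dataclass

@dataclass
class PhysicalHost:
    """A physical server. The vendor is irrelevant to the logical layer."""
    vendor: str
    cpu_cores: int
    memory_gb: int

class ResourcePool:
    """Logical abstraction over heterogeneous physical hosts."""
    def __init__(self):
        self.hosts = []

    def register(self, host: PhysicalHost):
        # The pool accepts any host, regardless of backend manufacturer.
        self.hosts.append(host)

    def capacity(self):
        # Present aggregate resources to the software-defined stack
        # or management layer, hiding per-vendor details.
        return {
            "cpu_cores": sum(h.cpu_cores for h in self.hosts),
            "memory_gb": sum(h.memory_gb for h in self.hosts),
        }

pool = ResourcePool()
pool.register(PhysicalHost("VendorA", cpu_cores=32, memory_gb=256))
pool.register(PhysicalHost("VendorB", cpu_cores=64, memory_gb=512))
print(pool.capacity())  # {'cpu_cores': 96, 'memory_gb': 768}
```

The point of the sketch is simply that the management layer consumes one aggregate view; which vendor contributed which cores never leaks upward.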
The data center model will continue to evolve and will be tasked with supporting ever newer technologies. Infrastructures will need to provide robust platforms capable of aligning with the goals of the business. The future data center will attract many new users and customers if it can provide a truly agnostic development and growth infrastructure.
How do you think data centers will evolve? Please share your thoughts in the comment section below.