Interop ITX Infrastructure Chair Keith Townsend provides guidance on hyperconvergence, cloud migration, network disaggregation, and containers.
Enterprise infrastructure teams are under massive pressure as the cloud continues to upend traditional IT architectures and ways of providing service to the business. Companies are on a quest to reap the speed and agility benefits of cloud and automation, and infrastructure pros must keep up.
In this rapidly changing IT environment, new technologies are challenging the status quo. Traditional gear such as dedicated servers, storage arrays, and network hardware still have their place, but companies are increasingly looking to the cloud, automation, and software-defined technologies to pursue their digital initiatives.
According to IDC, by 2020, the heavy workload demands of next-generation applications and IT architectures will have forced 55% of enterprises to modernize their data center assets by updating their existing facilities or deploying new facilities.
Moreover, by the end of next year, the need for better agility and manageability will lead companies focused on digital transformation to migrate more than 50% of their IT infrastructure in their data center and edge locations to a software-defined model, IDC predicts. This shift will speed adoption of advanced architectures such as containers, analysts said.
Keith Townsend, founder of The CTO Advisor and Interop ITX Infrastructure Track Chair, keeps a close eye on the evolution of IT infrastructure. On the next pages, read his advice on what he sees as the top technologies and trends for infrastructure pros today: hyperconvergence, network disaggregation, cloud migration strategies, and new abstraction layers such as containers.
(Image: Timofeev Vladimir/Shutterstock)
Get live advice on networking, storage, and data center technologies to build the foundation to support software-driven IT and the cloud. Attend the Infrastructure Track at Interop ITX, April 30-May 4, 2018. Register now!
Hyperconvergence marketing vs. reality
Hyperconverged infrastructure is often touted as saving companies a lot of money compared to the traditional three-tier architecture of separate, best-of-breed networking, storage, and compute components. Vendors argue that traditional storage arrays are time-consuming, complex to maintain, and require too much manpower.
But storage management has evolved to become much simpler, Townsend said. "Even the most advanced storage arrays don't take much storage training…That administrative burden has certainly lightened."
(Image: Maksim Kabakou/Shutterstock)
Initially, the cost of hyperconverged infrastructure (HCI) may be lower than that of a traditional architecture since a company can start with three to four systems, Townsend said. However, as an environment grows, the price becomes similar to -- if not higher than -- a three-tier model, from a capital expenditure perspective.
"There are instances when HCI is a clear choice when it comes to saving costs for smaller environments," Townsend said. "But for large environments, cost is probably not the determining factor."
(Image: Javier Brosch/Shutterstock)
Traditional networking vs. disaggregation
In traditional networking equipment like switches, the operating system is tightly coupled with the hardware. Townsend pointed to Cisco as an example. Now, however, some networking vendors such as Juniper Networks and Dell Technologies have embraced the concept of separating the NOS from hardware by making it possible to run the NOS on a white-box switch. This is called disaggregation or open networking, a movement in which hyperscale companies such as Facebook have led the way.
From Townsend's perspective, network disaggregation is an exciting trend for enterprises and service providers alike. "This brings an enormous amount of capability to the average enterprise or telco," he said, explaining that it allows a company to install only necessary services, which reduces support issues.
"There have been many times where we've had to upgrade an entire switch fabric or wide-area network because there was some vulnerability in a service we didn't even use, but we couldn't disable or uninstall it because of the tightly coupled nature of the hardware and software," he said.
Another benefit of disaggregation is the ability to bring DevOps principles to networking, Townsend said. Since most open networking platforms are based on Linux, a company can apply continuous development processes to managing the network.
"But most of the value comes from being able to deploy what you need on the hardware platforms that best fit your use case," he added.
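Applying those DevOps principles could look something like the sketch below: a pipeline-style check that fails a build if a rendered switch config enables any service outside an approved list, so an unused (and potentially vulnerable) service never reaches the fabric. The `service <name> enable|disable` line format and the service names here are illustrative assumptions, not the syntax of any real network operating system.

```python
# Hypothetical CI check for a disaggregated, Linux-based switch:
# reject any rendered config that enables a service outside an
# approved allowlist. The config syntax below is an illustrative
# convention, not a real NOS format.

APPROVED_SERVICES = {"bgp", "lldp", "ntp"}

def enabled_services(config_text):
    """Return the set of services a rendered config enables."""
    enabled = set()
    for line in config_text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "service" and parts[2] == "enable":
            enabled.add(parts[1])
    return enabled

def check_config(config_text):
    """Return the enabled services that are NOT on the approved list."""
    return enabled_services(config_text) - APPROVED_SERVICES

if __name__ == "__main__":
    config = """\
service bgp enable
service lldp enable
service telnet enable
service ntp disable
"""
    unexpected = check_config(config)
    if unexpected:
        # In a real pipeline this would fail the build before deployment.
        print("unapproved services enabled:", sorted(unexpected))
```

Because the platform is Linux-based, a check like this can run in the same CI system the organization already uses for application code, which is the point Townsend is making.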
Lift and shift
As enterprises increasingly adopted virtualization, the "lift and shift" strategy became popular, according to Townsend. Companies simply moved their applications from physical hardware to virtual machines. Generally, operations didn't change; they could run the same tools and processes in the new environment as the old one. "The temptation is to do the same thing with public cloud," Townsend said.
"A virtual machine is a virtual machine instance whether it's running in your private data center or running in a public cloud. Technically, the workload should work," he said.
So what's the problem?
No. 1 lift and shift problem: cost
The biggest concern with this approach is cost, Townsend said. "In the private data center, we eat the cost of inefficiency. If a VM is sized too large, if it continues to run whether it needs the resources or not, we eat that fixed cost of those physical resources in our budgets," he said. "In the public cloud, that isn't the case. If a VM is running and it doesn't serve a business case, then you're paying for that resource."
(Image: Carsten Reisinger/Shutterstock)
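The cost concern Townsend describes can be sketched as back-of-the-envelope arithmetic: in the public cloud, every hour an oversized VM runs is billed, whether the business needs it or not. The hourly rates and schedules below are illustrative assumptions, not any provider's actual pricing.

```python
# Back-of-the-envelope illustration of the lift-and-shift cost concern:
# an oversized VM left running 24/7 versus a right-sized VM run only
# during business hours. All rates are assumed for illustration.

HOURS_PER_MONTH = 730  # approximate hours in a month

def cloud_monthly_cost(hourly_rate, hours_running):
    """Pay-per-use: every running hour is billed, needed or not."""
    return hourly_rate * hours_running

if __name__ == "__main__":
    rate_oversized = 0.40    # assumed $/hour for the lifted, oversized VM
    rate_rightsized = 0.10   # assumed $/hour after right-sizing

    always_on = cloud_monthly_cost(rate_oversized, HOURS_PER_MONTH)
    scheduled = cloud_monthly_cost(rate_rightsized, 12 * 22)  # 12 h/day, 22 days

    print(f"lift-and-shift, always on: ${always_on:.2f}/month")   # $292.00/month
    print(f"right-sized, scheduled:    ${scheduled:.2f}/month")   # $26.40/month
```

In a private data center that inefficiency hides inside a fixed hardware budget; in the cloud it shows up directly on the monthly bill, which is why sizing and scheduling decisions that never mattered on-premises suddenly do.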
Townsend said the next big challenge with lift and shift cloud migration is security. Private data center processes don't necessarily translate well to a public cloud environment. User access rights, for example, need to be handled differently. So security processes, along with monitoring and application performance, have to be rethought for the public cloud, Townsend advised.
Lift and shift can be a valid model, but organizations need to tread carefully, according to Townsend. "While lift and shift may allow for physical running of workloads on x86 virtualized hardware in the cloud, your data center processes need to be adjusted for the nature of public cloud…Make sure you don't neglect governance when considering moving to the public cloud."
(Image: Krisda Ponchaipulltawee/Shutterstock)
New abstraction layers
Public cloud has disrupted how most infrastructure is managed and the services infrastructure teams deliver, Townsend said. "Developers and application owners are starting to ask for more capability in the base infrastructure, which is defining a new abstraction layer," he said.
This raises the question: What's the right abstraction layer to provide your developers and application owners?
"Should you supply raw containers or full-fledged, opinionated Platform as a Service, such as Cloud Foundry? This is not an easy question. It could be that you very well provide both," Townsend said.
PaaS platforms like Cloud Foundry serve as a "quick on-ramp to providing a simple development experience," Townsend said. Instead of the IT team determining the best way to deliver functions such as message buses and authentication services, Cloud Foundry will have an opinionated way of handling them.
Given the complexity when it comes to abstraction layer choices, IT teams need to sit down and have some tough conversations, Townsend said. "You have to dig in there. You need to get involved with your application teams, with your developers, to understand what the requirements are. How do they plan on developing code or implementing and supporting packaged software or third parties?"
Ultimately, IT teams need to figure out which approach offers the most flexibility while limiting choice so that IT doesn't have to support too broad a spectrum, he said.
(Image: Matej Kastelic/Shutterstock)