The Cloud-Native Journey for Telecommunications: Leveraging Cloud Technologies the Right Way
The industry is turning to NFV and the Twelve-Factor App, a methodology for building software-as-a-service apps that are suitable for deployment on modern cloud platforms.
October 23, 2020
Digital transformation initiatives and the new business workflows they create are fueling the move to cloud. In fact, Gartner found that while overall IT spending this year is likely to fall by 8 percent, spending on cloud computing is expected to increase by 19 percent. Specifically, Gartner expects cloud-based telephony and messaging, and cloud-based conferencing, to see high levels of spending, growing 8.9 percent and 24.3 percent, respectively.
Enterprises are looking to capitalize on emerging cloud technologies to drive differentiation and deliver a competitive edge for their organizations. Communications service providers (CSPs) play a critical role in delivering agile, secure, flexible, multi-cloud, and multi-access solutions to enterprise customers. Together, these emerging technologies are pushing the boundaries of age-old business practices and strategies. This shift is also creating opportunities for CSPs and enterprises to co-innovate and build ecosystem partnerships as nearly every industry strives to reimagine its business model.
‘Cloudification’ With NFV
When the telecommunications industry first started leveraging the cloud, network function virtualization (NFV) quickly became the predominant network architecture. The goal of NFV was to offer Software-as-a-Service (SaaS), but its building blocks are network functions (NFs), something not traditionally thought of as software in the same sense as enterprise applications in a SaaS context. Some of the early attempts to “cloudify” involved porting the underlying software from dedicated bare metal into virtual machines (VMs), allowing these Virtual Network Functions (VNFs) to take advantage of the basic cloud tenets.
The core architecture for NFV requires multiple layers of hierarchical control through orchestration to manage the extensive and intertwined dependencies. However, the 1+1 redundancy schemes used between VMs to ensure high availability sometimes leave resource consumption worse than that of the dedicated appliances being replaced. Configuration becomes so complex that the orchestration engines have to be augmented with custom, vendor-supplied VNF managers, further complicating interoperability. This left many looking for alternative network architecture approaches.
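The resource penalty of 1+1 redundancy is easy to quantify. The sketch below (an illustrative calculation, not from the original article) compares the idle-capacity fraction of a 1+1 scheme, where every active VM carries a dedicated standby, against an N+1 scheme that shares one spare across the group:

```python
def redundancy_overhead(active_vms: int, spares: int) -> float:
    """Fraction of total provisioned capacity that sits idle as standby.

    1+1 redundancy pairs every active VM with its own spare (spares ==
    active_vms); N+1 shares a single spare across the whole group.
    """
    total = active_vms + spares
    return spares / total

# 1+1: each of 10 VNF instances gets its own standby -> 50% of capacity idle
print(redundancy_overhead(10, 10))            # 0.5

# N+1: ten instances share one standby -> roughly 9% idle
print(round(redundancy_overhead(10, 1), 2))   # 0.09
```

Half of the provisioned capacity doing nothing is precisely how a "cloudified" VNF can end up consuming more than the appliance it replaced.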
People and process story
Technology, and how it is realized (as the NFV experience shows), is not the only factor in achieving cloud agility. Automation is one of the most effective tools for agility, but it needs to encompass both development and operations to be successful.
Build and deploy should be a one-step process based on a shared, version-controlled repository, so that everyone knows where to look. Likewise, shared metrics and data flows ensure everyone is talking about the same thing and oriented toward the same goal. Finally, both teams should share the same communication channels to establish a platform for more automated closed-loop processes and to avoid misunderstanding.
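The "one-step process" idea can be sketched as a single pipeline entry point that runs every stage in order and stops at the first failure. This is a minimal illustration, not a real CI tool; the step names are hypothetical, and in practice each step would shell out to the actual build and deploy tooling:

```python
from typing import Callable, List, Tuple

# Each step is a named callable returning True on success.
Step = Callable[[], bool]

def run_pipeline(steps: List[Tuple[str, Step]]) -> bool:
    """Run build/test/deploy as one command; halt on the first failure."""
    for name, step in steps:
        print(f"=> {name}")
        if not step():
            print(f"FAILED: {name}")
            return False
    return True

# Hypothetical stages; lambdas stand in for calls to the real tools.
pipeline = [
    ("checkout pinned revision", lambda: True),
    ("build artifact",           lambda: True),
    ("run automated tests",      lambda: True),
    ("deploy to environment",    lambda: True),
]
print(run_pipeline(pipeline))  # True
```

The point is that a developer or an automated trigger invokes one command, and everything downstream is scripted from the version-controlled source of truth.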
DevOps 1.0 has been mostly centered on harmonizing the interplay of development and operations with the goal of institutionalizing continuous delivery. In DevOps 2.0, we are seeing the emergence of adaptive feature delivery and a broadening of scope to non-technical teams. The business adopts cross-functional methods to ensure that software is iterated on a continuous cadence complementary to marketing and sales campaigns. Decoupling feature rollouts from code deployment gives marketing, design, and business teams control of targeted visibility and testing without consuming engineering resources or compromising the application's integrity.
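Decoupling rollout from deployment is typically done with feature flags: the code for a feature ships dark, and a configuration value controls who sees it. A minimal sketch of a deterministic percentage rollout (the flag name and rollout mechanics here are illustrative, not any particular vendor's API):

```python
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministic percentage rollout: hashing (flag, user) into a
    fixed bucket means the same user always gets the same answer, so
    ramping 10% -> 50% -> 100% only ever adds users, never flips one off.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100
    return bucket < rollout_percent

# Rollout percentage lives in config, so business teams can widen it
# without any new code deployment.
print(flag_enabled("new-checkout", "user-42", 0))    # False
print(flag_enabled("new-checkout", "user-42", 100))  # True
```

The same mechanism supports targeted visibility (enable for an internal segment first) and instant kill switches if a feature misbehaves in production.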
Last but not least, DevOps 2.0 elevates security as a fundamental element from beginning to end, with the mindset that everyone is responsible for security. This is sometimes referred to as DevSecOps. By treating security as code, cybersecurity specialists are given the tools to contribute value with less friction.
Delivery versus deployment
While continuous deployment may not be right for every company, continuous delivery is an absolute requirement of DevOps. What, then, really differentiates the two? Fundamentally, delivery means the pipeline contains all the mechanisms and practices needed to ensure the software is fit for production, placing every change into a production-like environment with rigorous automated testing. This gives the business confidence that, based on its criteria, every change could then be "push-button" deployed to the production environment. If regulatory and business conditions allow, the goal should be continuous deployment into production for every change.
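The distinction can be made concrete with a small model (an illustrative sketch, not any specific tool): continuous delivery leaves every change tested, staged, and one button-press from production; continuous deployment simply calls the deploy step automatically instead of waiting for a human:

```python
from dataclasses import dataclass

@dataclass
class Change:
    sha: str
    tests_passed: bool = False
    staged: bool = False
    deployed: bool = False

def deliver(change: Change) -> Change:
    """Continuous delivery: every change is exercised in a
    production-like environment and left fit for production."""
    change.tests_passed = True   # rigorous automated test suite
    change.staged = True         # running in the prod-like environment
    return change

def deploy(change: Change) -> Change:
    """The 'push button'. Under continuous deployment this is invoked
    automatically after deliver(); under continuous delivery, a human
    presses it once business criteria are met."""
    if not (change.tests_passed and change.staged):
        raise RuntimeError("change is not fit for production")
    change.deployed = True
    return change

c = deliver(Change(sha="abc123"))
print(c.staged, c.deployed)   # True False -- delivered, not yet deployed
print(deploy(c).deployed)     # True
```

The guard in `deploy()` captures the key invariant: nothing reaches production that has not already passed through the full delivery pipeline.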
Better software for a better cloud
Taking NFV as an example, having an underlying cloud technology like virtualization did not mean that software was developed in a way that maximized the cloud's capabilities. Even with the drastic improvement DevOps brought to deployments, the way software was developed still prevented agility at scale in the cloud. Out of the community's adjustments, a manifesto emerged: the Twelve-Factor App, a methodology that now serves as a foundation for microservices architectures.
The Twelve-Factor App is a methodology for building cloud-based applications that promotes portability and enables build/test automation, continuous deployment, and scalability. Together, the microservices architecture and the concept of a twelve-factor application give developers a pattern for creating software that is truly native to the cloud, while DevOps becomes paramount for managing it. Technology continues to progress, however. This is best illustrated by the move from hypervisor-based virtual machines to container virtualization directly in the operating system, which is lighter weight and better suited to microservices.
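One of the twelve factors makes the portability point concretely: factor III ("Config") says deploy-specific settings live in the environment, never in the code, so the same build artifact runs unchanged in dev, staging, and production. A minimal sketch (the variable names `DATABASE_URL`, `LOG_LEVEL`, and `PORT` are illustrative conventions, not mandated by the methodology):

```python
import os

def load_config() -> dict:
    """Factor III ('Config'): read deploy-specific settings from the
    environment, with safe defaults for local development."""
    return {
        "db_url":    os.environ.get("DATABASE_URL", "sqlite:///dev.db"),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
        "port":      int(os.environ.get("PORT", "8080")),
    }

# The platform (or operator) injects the values; the artifact is unchanged.
os.environ["PORT"] = "9000"
print(load_config()["port"])  # 9000
```

Because nothing environment-specific is baked into the build, the identical container image can be promoted from one stage to the next, which is exactly what makes automated delivery pipelines trustworthy.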
However, questions such as how to orchestrate large numbers of containers and how to manage containers across multiple clouds underscore the need for the industry to chart a path through an ever-expanding technology landscape and help developers make sense of it all. This is where the Cloud Native Computing Foundation (CNCF) comes into play: an open-source software foundation dedicated to making cloud-native computing universal and sustainable.
By embracing CNCF's new standards and working towards cloud-native, CSPs can extend their network and business practices, allowing them to easily consume and integrate cloud services, as well as provide their own new services, such as 5G network slices, to enterprise customers. In turn, this will allow operators to move into vertical services and fully monetize their own value, making the cloud-native journey for telecommunications worthwhile.