Network Cloudification Key to CSP Energy Savings
Network operators seeking energy savings must apply a sustainability perspective at the design phase and support that with automatic network function optimization and orchestration.
August 1, 2023
GSMA recently released the second edition of a report analyzing the breakdown of energy consumption and net-zero initiatives in wireless cellular networks. It highlights key areas that can help with energy savings.
Although the results are rough averages with large deviations (and also quite different from the first edition), the report concludes that the majority of electricity is spent on operating the radio access network (RAN), which is no surprise considering its geographical span and the sheer number of devices (base stations) involved.
According to the report, core networks and data centers owned by communications service providers (CSPs) account for roughly 13 to 22 percent of energy expenditure, and there are several ways CSPs can reduce the power consumption of their core networks.
With the advent of 5G core deployments, and as the industry heads toward virtualization, softwarization, and cloudification (including RAN functionality), a large portion of network functions are becoming software workloads running in data center environments.
This transformation is also driving the adoption of as-a-service (aaS) models for core networks, where the core is not owned by the CSP but consumed as a service running on third-party cloud infrastructure.
Cloudification has several benefits in lowering the total cost of ownership (TCO) of core networks, such as flexible deployment and resource utilization, automation, and the use of state-of-the-art IT hardware. It also brings a number of opportunities to decrease the energy consumption required to run core networks.
One straightforward advantage lies in the sheer scale of the IT hardware industry, which counts power efficiency and energy savings among its top priorities.
From data center site technology such as air conditioning and lighting to rack and server solutions, including power supply unit efficiency, server components, and storage, a large effort goes into consumption-reducing technologies that can be readily used in a cloudified core network environment.
Recent advances in server technologies allow the frequencies of individual processor cores to be tuned to the software workload's requirements. As power consumption scales with processor clock speed, this allows workloads to be optimized jointly for performance and consumption.
These methods incorporate closed-loop control that keeps CPU speed at the minimum needed to maintain workload performance.
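As a rough illustration, such a closed loop can be as simple as capping a core's clock through the standard Linux cpufreq sysfs interface based on measured utilization. The sketch below is a minimal example, not a production controller: the thresholds and step size are invented, a real deployment would react to workload performance indicators (for example, latency) rather than raw utilization, and writing to sysfs requires root privileges.

```python
# Minimal closed-loop frequency controller sketch (Linux cpufreq sysfs).
# The paths are the standard Linux cpufreq interface; the thresholds and
# step size are illustrative assumptions, not tuned values.
import time

CPU = 0  # core under control
BASE = f"/sys/devices/system/cpu/cpu{CPU}/cpufreq"

def read_khz(name):
    with open(f"{BASE}/{name}") as f:
        return int(f.read())

def set_max_khz(khz):
    # Capping scaling_max_freq leaves the kernel governor in charge
    # while bounding the core's top speed (requires root).
    with open(f"{BASE}/scaling_max_freq", "w") as f:
        f.write(str(khz))

def busy_fraction(interval=1.0):
    """Utilization of the controlled core from /proc/stat deltas."""
    def sample():
        with open("/proc/stat") as f:
            for line in f:
                if line.startswith(f"cpu{CPU} "):
                    v = [int(x) for x in line.split()[1:]]
                    return v[3] + v[4], sum(v)  # idle + iowait, total
    idle0, total0 = sample()
    time.sleep(interval)
    idle1, total1 = sample()
    total = (total1 - total0) or 1
    return 1.0 - (idle1 - idle0) / total

hw_min, hw_max = read_khz("cpuinfo_min_freq"), read_khz("cpuinfo_max_freq")
step = (hw_max - hw_min) // 10
cap = hw_max

while True:
    util = busy_fraction()
    if util > 0.80 and cap < hw_max:    # headroom exhausted: speed up
        cap = min(hw_max, cap + step)
        set_max_khz(cap)
    elif util < 0.50 and cap > hw_min:  # plenty of slack: slow down
        cap = max(hw_min, cap - step)
        set_max_khz(cap)
```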
Another approach is to associate power profiles, such as minimum and maximum clock speeds, with software workloads, allowing network function components with different requirements to be deployed on suitable hosting nodes.
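A minimal sketch of this idea follows; the profile and node names, frequency figures, and matching rule are illustrative assumptions rather than any vendor's actual scheduling logic.

```python
# Hypothetical power-profile matching sketch: profiles, nodes, and the
# matching rule are invented for illustration.
from dataclasses import dataclass

@dataclass
class PowerProfile:
    name: str
    min_khz: int   # lowest clock the component tolerates
    max_khz: int   # highest clock it is allowed to request

@dataclass
class Node:
    name: str
    hw_min_khz: int
    hw_max_khz: int

def eligible_nodes(profile, nodes):
    """Nodes whose frequency range can honor the workload's profile."""
    return [n for n in nodes
            if n.hw_min_khz <= profile.min_khz
            and n.hw_max_khz >= profile.max_khz]

# Example: a latency-sensitive packet-processing component and a
# background housekeeping component get different profiles.
fast = PowerProfile("user-plane", 2_000_000, 3_500_000)
slow = PowerProfile("housekeeping", 800_000, 1_600_000)
fleet = [Node("node-a", 800_000, 3_600_000),
         Node("node-b", 800_000, 2_000_000)]
print([n.name for n in eligible_nodes(fast, fleet)])  # ['node-a']
print([n.name for n in eligible_nodes(slow, fleet)])  # ['node-a', 'node-b']
```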
Finally, current chipsets support putting CPU cores into power-saving modes, which can be triggered in a timed manner.
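On Linux, one simple way to realize such timed power saving is to take cores offline through the CPU hotplug interface during a known low-traffic window, letting the kernel park them in their deepest idle state. The window and core list in the sketch below are invented for illustration.

```python
# Sketch of timed core parking via the Linux CPU hotplug interface
# (/sys/devices/system/cpu/cpuN/online). The off-peak window and core
# list are illustrative assumptions; writing these files requires root.
import datetime

NIGHT_CORES = range(8, 16)  # cores to park during the low-traffic window
WINDOW = (1, 5)             # 01:00-05:00 local time, assumed off-peak

def set_core_online(cpu, online):
    with open(f"/sys/devices/system/cpu/cpu{cpu}/online", "w") as f:
        f.write("1" if online else "0")

def apply_window(now=None):
    hour = (now or datetime.datetime.now()).hour
    parked = WINDOW[0] <= hour < WINDOW[1]
    for cpu in NIGHT_CORES:
        set_core_online(cpu, online=not parked)

apply_window()  # run periodically, e.g. from cron or a systemd timer
```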
Proper architecting and dimensioning of network functions to exploit the advanced power options of modern processors can already lead to significantly reduced consumption.
Overall, the deployment footprint, in terms of the number of CPU cores used, should always track the given network load level: large enough to carry it, but no larger.
Minimizing your footprint to enable energy savings
In cloud core setups involving multiple network functions, such as a complete 5G core, the ensemble of all software modules should be jointly optimized to minimize the footprint. This means that unnecessary CPU cores can be switched to power-saving mode under low-load conditions, or entire servers can even be turned off completely.
This requires dynamic, automated scaling and orchestration (turn-off and redeployment) of network function components to free up CPUs. While automatic scaling has long been at the forefront of cloud technology, there was little incentive to implement it in live networks, since network-hosting hardware has to be available for peak loads anyway. Energy efficiency requirements are now driving toward this goal, and consuming core networks in a pay-per-use aaS model pushes in the same direction.
To enable automated scaling and orchestration, network functions should be implemented with a proper understanding of each component's resource requirements under given load conditions. This calls for very precise, granular, and detailed dimensioning.
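The sketch below illustrates what such dimensioning data might look like and how it translates a load level into a core footprint. The component names and figures are invented for illustration and do not reflect real 5G core dimensioning values.

```python
# Hypothetical dimensioning sketch: per-component core requirements as a
# function of load (concurrent sessions). All figures are invented.
import math

# cores needed per 100k concurrent sessions, plus a fixed baseline
DIMENSIONING = {
    "amf": {"baseline": 2, "cores_per_100k": 1.5},
    "smf": {"baseline": 2, "cores_per_100k": 2.0},
    "upf": {"baseline": 4, "cores_per_100k": 6.0},
}

def required_cores(component, sessions):
    d = DIMENSIONING[component]
    return d["baseline"] + math.ceil(d["cores_per_100k"] * sessions / 100_000)

def footprint(sessions):
    """Total cores the ensemble needs at a given load level."""
    return {c: required_cores(c, sessions) for c in DIMENSIONING}

print(footprint(1_000_000))  # busy hour
print(footprint(100_000))    # night-time low load: freed cores can be parked
```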
Furthermore, network functions should maintain service even during these scaling operations. In recent years, network function vendors have turned toward microservice-based architectures and stateless design principles, which are key enablers for this.
As for network operators, the sustainability perspective should be applied from the design phase onward and supported by automatic network function placement optimization and orchestration. One example is using the availability of a renewable electricity source as a decision factor in network function orchestration.
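A toy version of such a placement decision might score candidate sites on renewable share against a latency budget, as in the sketch below; the sites, metrics, and weighting are illustrative assumptions, not an operator's actual policy.

```python
# Sketch of a placement score weighing renewable power availability
# against latency; all sites, metrics, and weights are invented.
SITES = {
    "dc-north": {"renewable_share": 0.9, "latency_ms": 12.0},
    "dc-south": {"renewable_share": 0.3, "latency_ms": 5.0},
}

def placement_score(site, latency_budget_ms=20.0, green_weight=0.6):
    """Higher is better; sites over the latency budget are ruled out."""
    m = SITES[site]
    if m["latency_ms"] > latency_budget_ms:
        return float("-inf")
    latency_score = 1.0 - m["latency_ms"] / latency_budget_ms
    return (green_weight * m["renewable_share"]
            + (1 - green_weight) * latency_score)

best = max(SITES, key=placement_score)
print(best)  # 'dc-north' with these (invented) numbers
```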
Moreover, to get the most out of the energy-saving methods mentioned above, operators should be keenly aware of traffic load and traffic mix in both the geographical and time dimensions.
However, as a network is critical infrastructure, all these automatic power optimizations must be performed in a manner that does not compromise service availability and performance. Hence, minimizing power consumption is not the sole and ultimate goal for the network; rather, it must be balanced against other key performance parameters.
Considering these stringent requirements and the complexity of power optimization methods, it is natural that various AI/ML models are being developed across the industry. These enable selecting and triggering the proper actions for scaling, orchestration, and CPU parameter settings to jointly optimize energy use and performance.
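While the models themselves are beyond this article's scope, the decision step they feed can be pictured as scoring candidate actions by predicted energy consumption under a service-level constraint. The sketch below is a hypothetical illustration: the action list, predictor stub, and numbers are invented stand-ins for trained models and live telemetry.

```python
# Sketch of energy/performance action selection: candidate actions are
# scored by predicted energy, subject to an SLA constraint. The stub
# predictor and figures are invented for illustration only.
ACTIONS = ["no_op", "scale_in", "scale_out", "lower_freq_cap"]

def predict(action, load):
    """Stub for an ML model returning (energy_kw, p99_latency_ms)."""
    table = {
        "no_op":          (10.0, 8.0),
        "scale_in":       (7.5, 11.0),
        "scale_out":      (12.0, 6.0),
        "lower_freq_cap": (8.5, 9.5),
    }
    return table[action]

def best_action(load, sla_ms=10.0):
    scored = [(a, *predict(a, load)) for a in ACTIONS]
    feasible = [(a, e, l) for a, e, l in scored if l <= sla_ms]
    # Among SLA-compliant actions, pick the lowest predicted energy;
    # if nothing is compliant, scale out to restore performance.
    return min(feasible, key=lambda t: t[1])[0] if feasible else "scale_out"

print(best_action(load=0.4))  # 'lower_freq_cap' with these stub numbers
```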
Marcelo Madruga is Head of Technology and Platforms of Core Networks at Nokia.