Introduction to Composable Infrastructure and its Use Cases

(Image: Data center. Source: Pixabay)

Composable infrastructure is a pool of physical or virtual infrastructure that can be provisioned on demand. A defined pool can contain compute, network, and storage resources, and can be as small as a couple of compute nodes or span multiple racks.

Composable infrastructure allows a business to improve resource utilization by providing infrastructure resources on demand and reclaiming them when they are no longer required. A high degree of process and automation maturity is needed, coupled with a firm understanding of resource requirements.

Depending on the organization and workload characteristics, resource demand can be predictable or sporadic. Predictable demand aligns with a schedule-based strategy, where resources are dedicated to a task for a designated time window and reassigned to another task afterwards. Sporadic demand is harder to plan capacity for and aligns better with self-service infrastructure requests.

Banks typically process items such as fees and statements as a batch job during a set overnight window; this is an example of predictable demand. Using composable infrastructure, a bank could allocate physical resources to improve processing time, and after the processing window closes those resources could be reallocated to another service, perhaps frontend services for branches.
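
As a rough sketch, a schedule-based reallocation policy can be expressed in a few lines of Python. The time windows and workload names below are hypothetical and stand in for whatever orchestration tooling actually recomposes the pool.

```python
from datetime import time

# Hypothetical schedule: each entry maps a time window to the workload that
# should own the shared pool of physical nodes during that window.
SCHEDULE = [
    (time(22, 0), time(6, 0), "overnight-batch"),   # fees and statements
    (time(6, 0), time(22, 0), "branch-frontend"),   # daytime branch services
]

def workload_for(now):
    """Return the workload the pool should be composed for at time 'now'."""
    for start, end, workload in SCHEDULE:
        if start <= end:
            if start <= now < end:
                return workload
        elif now >= start or now < end:  # window wraps past midnight
            return workload
    return "idle"

print(workload_for(time(23, 30)))  # overnight-batch
print(workload_for(time(9, 15)))   # branch-frontend
```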

Composable infrastructure and IaC

Composable infrastructure is an expansion of infrastructure as code (IaC), using infrastructure as code principles and processes to define a fully configured infrastructure stack. Code is used to perform the deployment and configuration of compute, network, storage, OS, and applications.
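
A minimal sketch of what such a code-defined stack might look like, using plain Python data structures; the field names and values are illustrative assumptions and are not tied to any specific IaC tool.

```python
from dataclasses import dataclass

@dataclass
class StackDefinition:
    """Code-defined description of a full infrastructure stack (illustrative)."""
    name: str
    compute_nodes: int
    cpu_per_node: int
    memory_gb_per_node: int
    network_vlan: int
    storage_tb: int
    os_image: str            # OS deployed onto each node
    app_config_role: str     # hook into a configuration management tool

batch_stack = StackDefinition(
    name="overnight-batch",
    compute_nodes=8,
    cpu_per_node=32,
    memory_gb_per_node=256,
    network_vlan=120,
    storage_tb=40,
    os_image="linux-baseline-2024",
    app_config_role="batch-processing",
)
print(batch_stack)
```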

The workflow that composes the infrastructure needs to cater for adding nodes to, and removing nodes from, an environment that may or may not already exist. Building such a workflow requires a high level of organizational maturity and may not be compatible with all applications.
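
A toy reconciliation loop in Python illustrates the idea; the inventory dictionary and node-naming scheme stand in for real bare-metal or virtual provisioning APIs and are assumptions for the sketch.

```python
def compose(env, desired_nodes, inventory):
    """Reconcile an environment toward a desired node count (idempotent sketch).

    'inventory' maps environment names to lists of node IDs; in a real system
    these steps would drive bare-metal or virtual provisioning APIs.
    """
    nodes = inventory.setdefault(env, [])      # create the environment if it does not exist
    while len(nodes) < desired_nodes:          # grow: claim nodes from the free pool
        nodes.append(f"{env}-node-{len(nodes) + 1}")
    while len(nodes) > desired_nodes:          # shrink: release nodes back to the pool
        nodes.pop()

inventory = {}
compose("faculty-db", 4, inventory)  # environment did not exist, so it is created and grown
compose("faculty-db", 2, inventory)  # the same workflow later shrinks it
print(inventory)                     # {'faculty-db': ['faculty-db-node-1', 'faculty-db-node-2']}
```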

When working with bare metal provisioning, hardware needs to be able to fulfill multiple roles. In other words, special-purpose hardware builds do not align well with infrastructure as code unless there are multiple clusters or environments that use hardware for the same purpose.

Examining use cases

Like many universities, Train University has different faculties, each consuming IT resources to accomplish their needs. Current IT management processes have led to many IT silos, as specific faculties have dedicated IT infrastructure, with core services provided from a shared infrastructure. In the current state, maintaining and operating IT services is difficult and expensive; this has triggered an initiative to centralize IT infrastructure and, where possible, offer it as a service.

Analysis of the current environments has found commonalities between faculties' use of infrastructure. In some cases, faculties use infrastructure for the same purpose, such as databases. In other cases, the resource requirements for different purposes are almost identical.

The proposed solution is to have faculties request a pool of resources, in predefined scaling units, for a set period. Faculties can provide some configuration settings where required, including OS deployment and hooks into configuration management tools for applications. When the request completes, the infrastructure is available and ready to consume.

A scaling unit is a logical construct that defines how a solution scales; it could be a single server, an entire rack, or something in between. A solution that uses distributed storage, for example, could have a scaling unit with a minimum number of nodes and then scale by individual nodes afterwards.
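
As a hedged illustration, a scaling unit could be modeled as a minimum size plus an increment; the numbers below (a four-node minimum and one-node increments for a distributed storage cluster) are assumptions for the example only.

```python
from dataclasses import dataclass

@dataclass
class ScalingUnit:
    """Logical construct describing how a solution grows."""
    minimum_nodes: int   # smallest deployable footprint
    increment: int       # nodes added per scaling step

    def size_after(self, steps):
        """Total node count after a number of scaling steps beyond the minimum."""
        return self.minimum_nodes + steps * self.increment

# A distributed storage solution: at least four nodes, then one node at a time.
storage_unit = ScalingUnit(minimum_nodes=4, increment=1)
print(storage_unit.size_after(0))  # 4
print(storage_unit.size_after(3))  # 7
```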

After the request is submitted, a workflow begins configuring the infrastructure to provide the requested service. The workflow could configure physical infrastructure such as switches, storage arrays, and servers, and it could also configure logical constructs such as security settings.
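
At a very high level, the workflow might run an ordered set of steps; the step names, request fields, and print statements below are placeholders for calls into real switch, storage, and server management APIs.

```python
# Illustrative only: each step would call real switch, storage array, and server APIs.
def run_provisioning_workflow(request):
    steps = [
        ("configure network switches", lambda r: print(f"VLAN {r['vlan']} trunked to node ports")),
        ("carve storage volumes",      lambda r: print(f"{r['storage_tb']} TB presented to hosts")),
        ("image servers",              lambda r: print(f"{r['nodes']} nodes booted with {r['os_image']}")),
        ("apply security settings",    lambda r: print(f"policy '{r['policy']}' applied")),
    ]
    for name, step in steps:
        print(f"-> {name}")
        step(request)

run_provisioning_workflow({
    "vlan": 120,
    "storage_tb": 40,
    "nodes": 8,
    "os_image": "linux-baseline-2024",
    "policy": "faculty-default",
})
```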

There is a risk that multiple faculties will require infrastructure for the same purpose at the same time, and the proposed solution needs to account for these situations. The first step is to work with faculties from the start and understand whether they have a schedule for when they need resources available. This helps to establish whether a recurring booking might be required and to shape strategies for managing capacity during peak times, remembering that peak times vary between faculties.
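
As a simple illustration of how recurring bookings could be validated against capacity, the following sketch sums overlapping requests per day; the pool size, faculties, and booking windows are invented for the example.

```python
POOL_CAPACITY = 16  # total nodes in the shared pool (assumed for the example)

# Each booking: (faculty, first_day, last_day, nodes), using day-of-year numbers.
bookings = [
    ("engineering", 100, 110, 10),
    ("science",     105, 112, 8),
]

def overbooked_days(bookings, capacity):
    """Return the days on which the combined requests exceed pool capacity."""
    demand = {}
    for _faculty, start, end, nodes in bookings:
        for day in range(start, end + 1):
            demand[day] = demand.get(day, 0) + nodes
    return sorted(day for day, total in demand.items() if total > capacity)

print(overbooked_days(bookings, POOL_CAPACITY))  # days 105-110 exceed the 16-node pool
```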

Understanding the types of workloads that are commonly required provides valuable information for choosing hardware configurations that suit a variety of solutions, as well as configurations that suit specific solutions.

In the above scenario, trade-offs are required to build a composable infrastructure solution that aligns with the directive given, and this is always the case in the real world. Talking to the consumers and setting expectations is essential to gaining acceptance.

In summary, composable infrastructure provides a cloud-like experience for provisioning hardware and software resources for a solution on demand. A high degree of automation and process maturity is required to provide end-to-end provisioning and decommissioning. Typical use cases center on improving resource utilization and increasing management efficiency.