9 Best Practices for Managing Infrastructure in a Containerized Environment

  • IT infrastructure professionals, particularly those who work on DevOps teams, are increasingly being asked to support containerized environments. But while tools like Docker and Kubernetes can make life easier for application developers, they place new demands on the people in charge of managing and monitoring hardware.

    According to Gartner, "by 2020, more than 50 percent of global organizations will be running containerized applications in production." That's a significant increase from less than 20 percent of enterprises that had deployed containers in production in 2017.

    Similarly, the Interop ITX and InformationWeek 2018 State of Cloud Computing Survey found that 42 percent of enterprises surveyed were running or considering running containers in the cloud. In addition, only 19 percent said they had no plans related to containers. By comparison, only 20 percent of respondents were either running containers in production or considering doing so in 2017.

    Clearly, organizations are embracing containers. And that means changes for the operations professionals in charge of managing data center hardware.

    For infrastructure professionals, some of whom have only recently completed the shift from supporting traditional workloads to supporting virtualized workloads, this transition is a major change. In an email interview, Eric Wright, director of technical marketing at workload automation vendor Turbonomic, explained that containers are fundamentally different from VMs. "Containers are operationally different in both the lifecycle — which is typically shorter and ephemeral in nature — and the procedures for managing them," he said. "The ephemeral container workload must be handled entirely as code from inception to termination, and that changes how we size, deploy, inspect, observe and control them."

    The following slideshow details nine best practices from Wright and other experts that can help IT infrastructure pros better manage containerized workloads.

    (Image: Pixabay)

  • Get ready to learn

    Because containers are so new, few IT professionals have experience working with the technology. That means organizations (or individuals) will need to invest in some training.

    "Although there is growing interest and rapid adoption of containers, running them in production requires a steep learning curve due to technology immaturity and lack of operational know-how," said Arun Chandrasekaran, research vice president at Gartner, in a blog post.

    "Be ready to learn and rethink operations, performance, monitoring, and infrastructure in general," said Wright. "Containers represent an exciting opportunity to bring new practices and methods into IT operations and to close the gap between applications and infrastructure."

    (Image: Pixabay)

  • Plan ahead for rapid change

    "Think about how to start, stop, manage, scale, and observe your environment when there is rapid change and non-persistent workloads," advised Wright. He recommended that IT infrastructure pros ask themselves five questions when planning for containerized workloads:

    • How do you size the container workload?
    • Where do you place the container workload?
    • How do you ensure you have the resources you need and that you can modify the configuration when fluctuations occur?
    • How do you operationalize when the workloads may come and go before a typical cycle (e.g., hours instead of days, and minutes instead of months)?
    • How do you increase elasticity and flexibility without impacting the performance of the application?
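
    To make the sizing and placement questions concrete, here is a minimal sketch of how the answers become code, using the official Kubernetes Python client. It assumes a reachable cluster with a local kubeconfig, and every name in it (the pod, the image, the node label) is illustrative rather than prescriptive.

    ```python
    # A minimal sketch: expressing container sizing and placement as code
    # with the official Kubernetes Python client (pip install kubernetes).
    # Pod name, image, and node label are illustrative assumptions.
    from kubernetes import client, config

    config.load_kube_config()  # reads ~/.kube/config; assumes cluster access

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="web-worker", labels={"app": "web"}),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="web",
                    image="nginx:1.25",
                    # Sizing: requests answer "what does this workload need?"
                    # while limits cap what it may consume when demand spikes.
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "250m", "memory": "128Mi"},
                        limits={"cpu": "500m", "memory": "256Mi"},
                    ),
                )
            ],
            # Placement: constrain the scheduler with a node label.
            node_selector={"disktype": "ssd"},
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
    ```

    Because the whole specification lives in code, resizing or relocating the workload when fluctuations occur is an edit and a redeploy, not a ticket.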

    (Image: Pixabay)

  • Enable self-service

    One of the big appeals of containers is that developers can easily spin them up and deploy their applications along with all their dependencies. But ceding that level of control to the development team makes some operations professionals nervous. They sometimes push back by implementing cumbersome processes and procedures in order to ensure that they have the visibility and manageability they need.

    However, experts say that limiting developers' self-service capabilities is a mistake that undermines the benefits associated with containers. They recommend looking for solutions that unite both self-service and monitoring capabilities — giving both developers and operations pros what they need to do their jobs.
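
    One common pattern for balancing the two is a dedicated namespace per team, bounded by an operations-defined quota. The sketch below shows that pattern with the Kubernetes Python client; the team name and quota values are assumptions for illustration, not recommended sizes.

    ```python
    # A sketch of self-service with guardrails: developers get a namespace
    # they can deploy into freely, while operations caps total consumption
    # with a ResourceQuota. Names and values are illustrative assumptions.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    # Give the team a namespace of their own...
    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name="team-payments"))
    )

    # ...but bound what the namespace as a whole can consume.
    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="team-payments-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": "4", "requests.memory": "8Gi", "pods": "20"}
        ),
    )
    core.create_namespaced_resource_quota(namespace="team-payments", body=quota)
    ```

    Developers deploy without waiting on operations, and operations retains a hard ceiling it can observe and adjust.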

    (Image: Pixabay)

  • Rethink your monitoring

    Many of the legacy monitoring tools that organizations have been using to keep tabs on traditional or virtualized workloads don't support containerized applications. As a result, enterprises will likely find that they need to deploy new monitoring solutions. "It's therefore important to deploy packaged tools that can provide container and service-level monitoring, as well as linking container monitoring tools to the container orchestrators to pull in metrics on other components for better visualization and analytics," said Gartner's Chandrasekaran.
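
    As a small taste of what container-level visibility looks like, the sketch below samples per-container memory usage straight from the Docker engine's stats API, using the Docker SDK for Python. It is a starting point rather than a monitoring solution; a real deployment would ship these metrics to a backend such as the orchestrator-linked tools Chandrasekaran describes.

    ```python
    # A minimal sketch of container-level monitoring with the Docker SDK
    # for Python (pip install docker). It takes one stats snapshot per
    # running container; assumes a local Docker daemon.
    import docker

    client = docker.from_env()

    for container in client.containers.list():
        stats = container.stats(stream=False)  # one snapshot, not a stream
        mem_usage = stats["memory_stats"].get("usage", 0)
        mem_limit = stats["memory_stats"].get("limit", 1)
        print(
            f"{container.name}: "
            f"memory {mem_usage / mem_limit:.1%} of limit, "
            f"status {container.status}"
        )
    ```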

    (Image: Pixabay)

  • Automate your networks

    Networking can be particularly challenging in data centers that rely heavily on containers. Traditional enterprise networking procedures and tools can't handle the speed of creation or the portability of containers, but the networking capabilities built into container orchestration platforms aren't yet robust enough to meet enterprises' policy management needs. Until that situation is remedied, organizations will need to choose their network management tools very carefully. Experts often recommend software-defined networking tools as a good complement to containers.

    Chandrasekaran advised, "I&O teams must therefore eliminate manual network provisioning within containerized environments, enable agility through network automation and provide developers with proper tools and sufficient flexibility."
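
    The sketch below shows one form that automation can take: declaring network rules as code instead of provisioning them by hand. It uses the Kubernetes Python client to create a NetworkPolicy; the namespace and labels are illustrative assumptions, and the policy itself is a sketch rather than a recommended ruleset.

    ```python
    # A sketch of network provisioning as code: a NetworkPolicy that only
    # admits traffic to "web" pods from pods labeled role=frontend.
    # Namespace and labels are illustrative assumptions.
    from kubernetes import client, config

    config.load_kube_config()

    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="web-allow-frontend"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(match_labels={"app": "web"}),
            ingress=[
                client.V1NetworkPolicyIngressRule(
                    _from=[  # "_from" because "from" is a Python keyword
                        client.V1NetworkPolicyPeer(
                            pod_selector=client.V1LabelSelector(
                                match_labels={"role": "frontend"}
                            )
                        )
                    ]
                )
            ],
        ),
    )

    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="default", body=policy
    )
    ```

    Because the rule is code, it travels with the workload and takes effect wherever the containers land, regardless of which node or subnet that turns out to be.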

    (Image: Pixabay)

  • Consider a hybrid approach to storage

    Storage is also troublesome in a containerized environment. VMs and traditional workloads need access to storage resources over the long term, but the picture becomes much different when dealing with containers that might exist only for a few minutes or hours.

    According to Wright, some infrastructure managers have created their own workaround to the situation. "What is becoming more common is using containers as a construct to use VM-like workloads with persistent storage attachment and network attachment, which introduces an interesting hybrid approach to how we manage them," said Wright. "The container purists would say that's 'using containers wrong' but it's proving to be a popular use case."
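
    For illustration, here is a minimal version of that hybrid pattern using the Docker SDK for Python: a named volume that outlives any single container, attached to a database container. The volume and container names (and the placeholder password) are assumptions.

    ```python
    # A minimal sketch of the hybrid pattern Wright describes: a named
    # volume gives an otherwise ephemeral container VM-like persistent
    # storage. Names and credentials here are illustrative placeholders.
    import docker

    client = docker.from_env()

    # The volume outlives any container that mounts it.
    client.volumes.create(name="pgdata")

    client.containers.run(
        "postgres:16",
        name="orders-db",
        detach=True,
        environment={"POSTGRES_PASSWORD": "example"},  # placeholder only
        # Persistent attachment: data survives restarts and removals.
        volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    )
    ```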

    (Image: Pixabay)

  • Don't forget about backups

    Containers may be short-lived, but that doesn't mean they don't need to be backed up. Because many disaster recovery and business continuity plans were drawn up before the advent of container technology, however, the tools and procedures enterprises have in place may not support containers.

    Experts recommend revisiting existing solutions and processes to make sure that containers and the data associated with them are being backed up. In some cases, that may require organizations to increase the frequency of their backups in order to make sure they are capturing containers that may have been spun up and shut down in the course of a single day.
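
    As a rough illustration of what more frequent, container-aware backups might look like, the sketch below archives labeled Docker volumes by mounting each one read-only into a short-lived helper container. The backup=true label convention and the /srv/backups destination are assumptions, not Docker requirements.

    ```python
    # A hedged sketch of frequent volume backups with the Docker SDK for
    # Python: tar up each named volume via a throwaway helper container.
    # The "backup=true" label and /srv/backups path are assumed conventions.
    import datetime
    import docker

    client = docker.from_env()
    stamp = datetime.datetime.now().strftime("%Y%m%d%H%M")

    for volume in client.volumes.list(filters={"label": "backup=true"}):
        # Short-lived helper container archives the volume to a host path.
        client.containers.run(
            "alpine:3.20",
            command=f"tar czf /backups/{volume.name}-{stamp}.tgz -C /data .",
            remove=True,
            volumes={
                volume.name: {"bind": "/data", "mode": "ro"},
                "/srv/backups": {"bind": "/backups", "mode": "rw"},
            },
        )
    ```

    Run on a schedule (hourly via cron, say), a script along these lines can capture the data behind containers that appear and vanish between nightly backups.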

    (Image: Pixabay)

  • Take appropriate security precautions

    Some experts say that containers are far more secure than VMs. Others — usually the ones who work for security vendors — say that containers are far less secure than other types of workloads.

    Practically speaking, IT infrastructure professionals need to work closely with security managers in order to make sure they are adequately protecting containerized workloads. That may include deploying some new security and monitoring tools, as well as carefully selecting the operating system distribution used to host the containers. In its blog post on container best practices, Gartner noted, "The integrity of the shared host OS kernel is critical to the integrity and isolation of the containers that run on top of it. A hardened, patched, minimalist OS should be used as the host OS, and containers should be monitored on an ongoing basis for vulnerabilities and malware to ensure a trusted service delivery."
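
    On the monitoring side, even a simple audit loop can catch common hardening gaps. The sketch below, a minimal illustration using the Docker SDK for Python, flags containers that run privileged or as root; it complements, rather than replaces, image vulnerability scanning.

    ```python
    # A minimal sketch of ongoing container auditing with the Docker SDK:
    # flag containers running privileged or as root, two common hardening
    # checks. Assumes a local Docker daemon.
    import docker

    client = docker.from_env()

    for container in client.containers.list():
        info = container.attrs  # the full `docker inspect` payload
        privileged = info["HostConfig"].get("Privileged", False)
        user = info["Config"].get("User") or "root"  # empty string = root
        if privileged or user == "root":
            print(
                f"review {container.name}: "
                f"privileged={privileged}, user={user}"
            )
    ```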

    (Image: Pixabay)

  • Expect some bugs

    While tools like Docker and Kubernetes have improved dramatically over the past couple of years, this is still fairly new technology. And that means all the bugs haven't been worked out yet.

    "Container orchestration, networking, and storage introduce new challenges at every layer that we have gotten used to 'just working' in virtualization environments," said Wright.

    However, Wright believes that these problems will lessen over time as more vendors begin offering supported versions of the open source tools that dominate the containerization market. He added, "There is a lot of complexity being introduced now, but that will ease up as more 'enterprise' container and container orchestration platforms emerge."

    (Image: Pixabay)