
How to Adopt DevOps on a Small Scale

Lone sys admins and small IT teams can reap great benefits from adopting DevOps practices. But with stretched resources, doing so can be difficult. Moreover, the marketing message behind DevOps adoption is often lost on smaller companies because it focuses on full-scale adoption. In reality, DevOps offers advantages to companies of any size.

Approaching the implementation of DevOps is the same as approaching any programmatic solution: Break it into a series of small steps. DevOps is adopted through the application of individual practices or components. Many components are independent of the others, each with a clear purpose and well-documented methods and tools.

By assessing each component on its own, you gain a better understanding of its value. In a resource-starved environment, this approach allows you to cherry-pick components and score quick wins. In this article, I examine several components and the value they offer to businesses of every size looking to adopt DevOps.

Version control system (VCS)

Version control improves the management and distribution of code and text files. Instead of ad hoc file-name versioning such as ‘Script.ps1.OLD’ or ‘OLD-Script.ps1’, a VCS maintains a history of changes. Additionally, changes can be rolled back quickly if required.

Git is the most common version control platform and, as a result, is well documented and mature. Git uses repositories to store and track files. Repositories can be stored locally or on a remote Git server.
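To give a rough idea of the local workflow, the sketch below assumes a hypothetical script named Deploy-WebServer.ps1; the commands are standard Git run from a PowerShell prompt:

    # Create a repository and record the first version of the script
    git init
    git add .\Deploy-WebServer.ps1
    git commit -m "Initial version of the web server deployment script"

    # Review the change history and, if needed, roll back to the previous version
    git log --oneline .\Deploy-WebServer.ps1
    git checkout HEAD~1 -- .\Deploy-WebServer.ps1

The history replaces the pile of renamed copies, and every change carries a message explaining what it did.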


Git servers are light on resources and simple to deploy, providing a central repository of managed files. The use of a central repository enables simple distribution of code.
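Assuming a small internal Git server is available (the URL below is a placeholder), publishing to and consuming that central repository takes only a few commands:

    # Publish the local repository to the central Git server
    git remote add origin https://git.example.local/it/scripts.git
    git push -u origin main

    # Any other machine or colleague can then pull the managed files
    git clone https://git.example.local/it/scripts.git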

Many text editors and IDEs have integrations with VCS, which simplifies the process of working with managed files.

Configuration management (CM)

Configuration management uses code to define system configuration. Maintaining environmental consistency is a primary advantage of using CM tools.

CM brings additional advantages in troubleshooting, recovery and documentation. Configurations can be used to validate an environment as part of the troubleshooting process. If a system rebuild is needed, the last known configuration is already available.

Writing the configuration produces the documentation at the same time. While the code doesn’t capture the intent or reasoning behind a configuration, it does record the settings themselves. This helps prevent outdated documentation.

The move to CM can usually be made incrementally, easing the learning process. A typical starting place is to ensure that DNS settings are correct. From there, you can define additional configurations, building on the skills you've learned.
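As a minimal sketch of that DNS starting point, the PowerShell Desired State Configuration (DSC) example below pins the DNS servers on a single node; the interface alias and server addresses are assumptions for illustration:

    # Define the desired DNS client settings for the local machine
    Configuration BaselineDns {
        Import-DscResource -ModuleName PSDesiredStateConfiguration

        Node 'localhost' {
            Script DnsServers {
                # Report the current DNS servers
                GetScript  = { @{ Result = (Get-DnsClientServerAddress -InterfaceAlias 'Ethernet0' -AddressFamily IPv4).ServerAddresses -join ',' } }
                # True only when the expected primary DNS server is present
                TestScript = { (Get-DnsClientServerAddress -InterfaceAlias 'Ethernet0' -AddressFamily IPv4).ServerAddresses -contains '10.0.0.53' }
                # Correct the setting when the test fails
                SetScript  = { Set-DnsClientServerAddress -InterfaceAlias 'Ethernet0' -ServerAddresses '10.0.0.53', '10.0.0.54' }
            }
        }
    }

    BaselineDns -OutputPath .\BaselineDns          # compile the configuration to a MOF
    Start-DscConfiguration -Path .\BaselineDns -Wait -Verbose

The same configuration also supports troubleshooting: Test-DscConfiguration -Path .\BaselineDns reports whether the live system still matches the last known-good settings.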

Test-driven design practices

Test-driven design (TDD) is the use of unit tests to validate code. Typically, tests are written first, and then the code is written to pass them. This practice has a steep learning curve, and it’s hard to see the value at the beginning. The benefit comes later in more reliable code and improved coding practices. Unit tests encourage a modular approach, which increases reusability. Having reusable code improves efficiency as the adoption of code-based practices increases.

As you gain coding experience, you'll want to improve previously written code. Unit tests can help validate functionality after refactoring.
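A minimal sketch of the idea in Pester (PowerShell's testing framework) might look like the following; Get-DiskReport is a hypothetical function, and the tests are written before it exists:

    # Tests describe the expected behaviour of Get-DiskReport before it is written
    Describe 'Get-DiskReport' {
        It 'returns one object per fixed volume' {
            $report = Get-DiskReport
            $fixed  = Get-Volume | Where-Object DriveType -eq 'Fixed'
            $report.Count | Should -Be $fixed.Count
        }

        It 'flags volumes with less than 10 percent free space' {
            $report = Get-DiskReport
            foreach ($item in $report | Where-Object LowSpace) {
                $item.FreePercent | Should -BeLessThan 10
            }
        }
    }

Running Invoke-Pester fails until the function satisfies the tests, and the same tests then guard against regressions when the code is later refactored.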

Automated deployment

Automated deployment is a frequent use case for DevOps, often spoken about as full end-to-end application provisioning. In reality, deploying an application is just a series of independent blocks joined together.

When it comes to smaller IT shops, the value of automating deployment is often overlooked due to the shops' static nature and low number of deployments. Additionally, testing automated deployment requires resources. However, deploying systems through an automated workflow improves efficiency and consistency. The result is a more predictable end state and less manual work, which benefits any business.

The key to starting an automated deployment is to take an existing task and break it down into small steps. Then take those steps and build a dependency tree. While the process is involved, it allows you to take a modular approach.

Perhaps the first step is to deploy an EC2 instance in AWS, then add a step that attaches additional disks. Next, use a configuration manager to apply OS changes such as DNS settings. These are settings that are easy to misconfigure in a manual process.
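A rough sketch of those first blocks, using the AWS Tools for PowerShell, might look like the following; the AMI, key pair, subnet and availability zone values are placeholders:

    Import-Module AWS.Tools.EC2

    # Block 1: launch the instance
    $reservation = New-EC2Instance -ImageId 'ami-0123456789abcdef0' `
                                   -InstanceType 't3.small' `
                                   -KeyName 'lab-keypair' `
                                   -SubnetId 'subnet-0abc1234' `
                                   -MinCount 1 -MaxCount 1
    $instanceId = $reservation.Instances[0].InstanceId

    # Block 2: create and attach an additional data disk
    # (a real workflow would wait for the volume to reach the 'available' state first)
    $volume = New-EC2Volume -Size 50 -AvailabilityZone 'us-east-1a' -VolumeType 'gp3'
    Add-EC2Volume -InstanceId $instanceId -VolumeId $volume.VolumeId -Device '/dev/sdf'

    # Block 3 would hand the instance to the configuration manager for OS settings such as DNS

Because each block stands on its own, the blocks can be written, tested and reused independently.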

In summary, introducing DevOps to an environment doesn’t require an all-or-nothing approach. By picking components individually, you can see the independence of each one and assess which deliver the best value. Additionally, this approach allows for an incremental implementation as resources allow.