5 Real Ways To Approach Security Automation
Is your business protected against security threats? The answer may depend on how often your engineers do manual work.
Ninety-five percent of all security incidents involve human error, according to IBM's 2014 Cyber Security Intelligence Index. This year alone, enterprises will spend $8 billion on cyber security, but those investments do little to prevent an engineer from misconfiguring a firewall or forgetting to patch a security vulnerability on a new server. Manual work is risk, and manual security work is a disaster waiting to happen.
Smart business leaders will minimize the probability of human error, and security automation is the best way to minimize risk. However, many enterprise organizations struggle with implementing best practices.
Here are five tips for automating enterprise security systems:
1. Automate infrastructure buildout first
Take, for example, a billion-dollar health insurance company that launched its entire fleet of applications on Amazon Web Services -- all in the span of three months. It has doubled in size year-over-year and has added server capacity hundreds of times since launch, yet still employs only two system engineers.
Two system engineers for tens of thousands of servers is a fairly impressive ratio. In a traditional datacenter, the ratio is close to one engineer for every 100 servers. It's particularly impressive considering that as a health insurance company, it must comply with HIPAA's rigorous security and privacy standards.
Instead of hiring twenty engineers to deploy instances manually, two engineers can write and maintain the automation scripts that deploy instances without human intervention. In the world of infrastructure as code, you don't just deploy one server with a single command, you deploy fleets of templated servers with defined security configurations automatically in response to pre-determined events.
Automating infrastructure buildout significantly reduces the opportunity for engineers to make security mistakes, because engineers don't have to manually configure security groups, networks, user access, firewalls, encrypted volumes, DNS names, log shipping, etc. They don't have to "remember" best practices every time they spin up a new instance, because they only need to touch the scripts, not the instances, to make a change.
If your team has the manpower to only automate one aspect of a system engineering team's tasks, choose infrastructure buildout. It is arguably the most vulnerable time in an instance's life, and automating it eliminates countless opportunities for error.
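The template-driven buildout described above can be sketched in a few lines. This toy Python example (all names are hypothetical; a real setup would drive a tool such as CloudFormation) shows the core idea: the security baseline travels with every launch, so engineers never have to "remember" it, and security-critical settings cannot be quietly weakened:

```python
# Minimal sketch of infrastructure-as-code buildout: one baseline template
# carries the security configuration, so engineers edit the template --
# never individual instances. All names here are hypothetical.

import copy

# Hypothetical baseline every new instance inherits. In practice this
# would live in version control and feed a provisioning tool.
BASELINE = {
    "security_groups": ["restricted-ssh", "internal-only"],
    "encrypted_volumes": True,
    "log_shipping": True,
    "iam_role": "least-privilege-app-role",
}

def build_launch_config(app_name, overrides=None):
    """Merge per-application settings onto the security baseline.

    Security-critical keys cannot be weakened by an override; attempts
    to do so raise instead of silently launching a weaker instance.
    """
    config = copy.deepcopy(BASELINE)
    for key, value in (overrides or {}).items():
        if key == "encrypted_volumes" and value is False:
            raise ValueError("encrypted volumes are mandatory")
        config[key] = value
    config["name"] = app_name
    return config
```

An engineer launching a new application supplies only the app-specific pieces (say, an instance type); everything security-related comes from the reviewed, version-controlled baseline.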
2. Continually check instances across the environment
On the day Heartbleed was announced in 2014, many companies found themselves scrambling to update OpenSSL across hundreds of thousands of servers and virtual instances.
In traditional IT, a major vulnerability like Heartbleed would mean every system engineer on staff working furiously for 18 hours to manually patch servers. For companies with an automation script, the only necessary change was a single line in their manifests to ensure the patched OpenSSL version was running instead.
Configuration management tools such as Puppet work from declarative manifests that describe how instances, virtualized servers, or even bare-metal servers should be configured. When a new instance is launched, these tools are responsible for getting that instance ready for production, including security-sensitive configuration tasks like binding the instance to central authentication, installing intrusion detection agents, requiring multi-factor authentication, etc.
But crucially for security, these tools also enforce their manifests and will proactively change configurations on previously launched instances. This has two implications. First, as described above, it is possible to respond to security vulnerabilities quickly across all environments. Second, it also means companies can guarantee that these historical vulnerabilities stay patched, since any changes or mistakes on individual instances will be automatically updated once the script interacts with the instance. This prevents accidental regressions in security configurations.
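The converge-toward-a-manifest loop at the heart of these tools can be illustrated with a toy sketch. This is not Puppet's actual implementation, just the concept: compare desired state to actual state on every run, and correct any drift. (The OpenSSL version shown, 1.0.1g, was the real Heartbleed fix; the rest of the names are illustrative.)

```python
# Toy sketch of how a declarative tool converges an instance toward its
# manifest: compare desired state to actual state and fix any drift on
# every scheduled run. Package names are illustrative only.

DESIRED = {"openssl": "1.0.1g"}  # one-line manifest change patches Heartbleed

def converge(actual, desired=DESIRED):
    """Return the actions needed to bring `actual` in line with `desired`.

    Because this runs on a schedule, a manual change on one box (drift)
    is automatically reverted the next time the agent checks in -- which
    is what keeps historical vulnerabilities patched.
    """
    actions = []
    for pkg, version in desired.items():
        if actual.get(pkg) != version:
            actions.append(("upgrade", pkg, version))
            actual[pkg] = version  # simulate applying the fix
    return actions
```

The second run on an already-patched instance produces no actions, which is exactly the guarantee described above: the fleet cannot silently regress.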
Historically, some IT professionals have been skeptical of automation for security reasons. They argue that in the wrong hands, the same scripts that improve security can be used to access every part of your environment. This is a real concern, but it can be mitigated with the same discipline applied to any sensitive code: restrict who can modify automation scripts, require review before changes are merged, and audit every run.
3. Fully automate deployments
Many IT leaders are busy implementing DevOps best practices, and automating deployments is one of the first processes to be reformed. But most IT leaders don't realize that automated deployments can also improve an enterprise's security posture.
In the Heartbleed example above, deployment automation ensures that changes made to the Puppet script get deployed across every instance or server automatically, on a schedule that preserves high availability. This makes it possible for a single engineer to respond to a security threat quickly without having to manually reboot servers.
When looking for the right tool, keep an eye out for those that work across virtualized and public cloud instances. This will help maintain consistent security policies across environments, minimizing complexity and risk. Tools like AWS CodeDeploy or Jenkins allow teams to deploy code across multiple environments simultaneously.
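A schedule that preserves availability is, at its core, a batching problem: patch a few servers at a time so most of the fleet stays in service. This simplified sketch (hypothetical names, with no real connection draining or health checks) shows the idea behind a rolling deployment:

```python
# Sketch of a rolling deployment: apply a patch in small batches so only
# a fraction of the fleet is out of service at any moment. Real tools add
# health checks, draining, and automatic rollback on failure.

def rolling_batches(servers, batch_size=2):
    """Yield servers in fixed-size batches, in order."""
    for i in range(0, len(servers), batch_size):
        yield servers[i:i + batch_size]

def deploy(servers, apply_patch, batch_size=2):
    """Apply `apply_patch` to every server, one batch at a time."""
    patched = []
    for batch in rolling_batches(servers, batch_size):
        for server in batch:  # take the batch out of rotation, patch, restore
            apply_patch(server)
            patched.append(server)
    return patched
```

With a batch size of two and a five-server fleet, at least three servers are always serving traffic while the patch rolls through.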
4. Include automated security monitoring in deployments
Enterprise IT environments have never been more complex. Hybrid clouds are on the rise, and hundreds or even thousands of applications are spread across multiple environments at varying stages of cloud readiness.
When multiple clouds support individual applications, it is crucial that engineers are able to monitor the entire infrastructure in a single interface. When downtime or security attacks occur, it usually takes more time for an engineer to find the problem than to fix it. Unified monitoring gives engineers the intelligence they need to protect core assets and contain the attack.
Enterprises already use tools to monitor their environments, but these tools are often custom built or monitor individual systems without a full 360° view of multiple clouds. Instead, look for tools that offer automated reporting and trend analysis across on-premises and cloud environments, sophisticated intrusion detection, and governance features that help maintain compliance.
When automation is already a part of your configuration process, installing these monitoring tools can be a simple matter of including their agents in the buildout template or machine image.
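When the buildout process is already data-driven, "install the monitoring agent everywhere" really is one more entry in the template. A minimal sketch, with hypothetical step and agent names:

```python
# Sketch of baking a monitoring agent into the buildout template: the
# provisioning steps are just data, so adding unified monitoring to every
# future instance is a one-line template change. Names are hypothetical.

BASE_STEPS = [
    "configure_network",
    "join_central_auth",
    "enable_log_shipping",
]

MONITORING_STEPS = [
    "install_monitoring_agent",
    "register_with_dashboard",
]

def provisioning_plan(include_monitoring=True):
    """Return the ordered provisioning steps for a new instance."""
    steps = list(BASE_STEPS)
    if include_monitoring:
        steps += MONITORING_STEPS
    return steps
```

Because every instance is built from the same plan, no server can reach production without the agent installed and registered.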
5. Prepare for the future of automation
One final word of advice to IT leaders: Don't wait until your hybrid cloud environment becomes a snarling mess of custom configurations to start automating security.
Automation is the force-multiplier enterprises need. IT is expected to be leaner and more responsive to new lines of business, while maintaining more complex infrastructure with the same (or fewer) staff. On top of that, custom hybrid architectures for individual applications are becoming more common. Budgets and engineering time are spread thin.
Automation may not shrink your engineering headcount, but it will allow your engineers to work more quickly and securely. As abstraction increases, it doesn't matter whether you're deploying to 10 servers or to 10,000. Ultimately, automation puts enterprises that want to match startup development speeds on a level playing field.
Within five years, as data balloons and hybrid environments become more common, the manual security approach will be impossible to maintain. Take the time now to develop an in-house automation team or outsource one. It may take months or even years for enterprises to achieve end-to-end process automation across hybrid environments, but automation will prove far more valuable than employee training or project managers at reducing human error.