For many organizations, 2020 was the year of maintaining business continuity. If last year was about keeping the lights on, 2021 is about operationalizing what is now our new normal. Without a doubt, the pandemic has proven that work from anywhere is feasible, accelerating what was inevitable. For the sake of improved productivity and efficiency, organizations were already diversifying their technology stacks by adopting SaaS models and migrating mission-critical business to the cloud (or multiple clouds). A recent study my company conducted indicates that 92% of IT leaders report their organizations are moving or have already moved to a hybrid work model, while also maintaining valuable on-premises legacy solutions.
Then came a sudden shift to remote work, where users could access cloud services and freely download applications directly, outside their company’s network. IT now faces a mountain of technology sprawl, not to mention real shadow IT challenges. In fact, nearly 50% of IT leaders said controlling SaaS sprawl is currently their biggest challenge, followed by discovering unmanaged applications (26%).
And if that wasn’t enough, a growing reliance on the cloud brings emerging security risks alongside it. In late 2020, the SolarWinds breach became public knowledge, and the details are worrisome. This breach taught us two important lessons: failure to defend your technology supply chain can give attackers the one opening they need to enter your network, and unnecessary complexity is risky.
The complexity conundrum
Today’s IT teams are challenged to operationalize their changing technology mix and manage the risks that come with it, especially with cloud environments.
For example, do you know how many cloud environments your organization uses today? What workloads are running, and who is using them? Do you have more licenses than you need? Without visibility into what you have and how you use it, you’re probably overspending and underutilizing, and likely creating significant security risk and potential compliance failures. Take application development as one example. Much of today’s application development has shifted from a completely build-from-scratch model to one where you build by assembling a collection of open-source components and cloud services. This enables fast, easy development but creates blind spots when those open-source projects receive updates and fixes that are never propagated to your product. That gap increases supply chain risk, as was the case with the SolarWinds breach.
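To make that blind spot concrete, here is a minimal sketch in Python that compares an application's pinned dependencies against the latest patched releases. The package names and version data are purely illustrative; in practice the patched-release list would come from an advisory feed or a software composition analysis tool.

```python
# Hypothetical sketch: flag pinned open-source components that lag behind
# the latest patched release. Package names and version data are illustrative;
# a real pipeline would pull the patched-release list from an advisory feed.

def parse_requirements(lines):
    """Parse 'name==version' pins, skipping comments and blank lines."""
    pins = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        pins[name.lower()] = version
    return pins

def find_outdated(pins, patched_releases):
    """Return (name, pinned, patched) for components pinned below the patched release."""
    outdated = []
    for name, pinned in pins.items():
        latest = patched_releases.get(name)
        if latest is None:
            continue
        if tuple(map(int, pinned.split("."))) < tuple(map(int, latest.split("."))):
            outdated.append((name, pinned, latest))
    return outdated

# Illustrative data only.
requirements = ["requests==2.25.0", "urllib3==1.26.5", "# internal tool"]
patched = {"requests": "2.25.1", "urllib3": "1.26.5"}

for name, pinned, latest in find_outdated(parse_requirements(requirements), patched):
    print(f"{name}: pinned at {pinned}, patched release is {latest}")
```

A check like this only finds what it is pointed at, which is exactly why the visibility question comes first: you cannot scan repositories you do not know exist.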
When considered on a larger scale, the complexity-driven security and compliance risk can be even more costly. If you’re a hybrid or multi-cloud customer who also relies on certain on-premises solutions, your legacy security stack probably doesn’t support the mix as well as it should. And your security team may not have the in-depth skills to fully understand cloud containers, on-premises legacy systems, mobile devices, and endpoints. Your choice then becomes sub-standard security or too many cooks in the kitchen, each with their own technology agenda, which only increases your complexity.
The need for a better mousetrap
Further complicating security in the cloud today is the Shared Responsibility model. When you rely on a third-party cloud service like Amazon Web Services, Microsoft Azure, or Google Cloud, you only receive a baseline level of security. Organizations too often assume ‘Amazon is protecting our data’ when, in reality, they have an interconnected spiderweb of applications and permissions, each one affecting the others.
When something goes wrong, you can’t call up Amazon to fix it. Instead, who can help you address the problem? Your internal staff? The cloud provider? Figuring out where the issue arose is more like a game of Clue where you’re searching for answers. It’s a vicious cycle that can result in no real progress.
The best defense against this complexity is to understand what your third-party cloud provider (or providers) are responsible for, as written in their Shared Responsibility policy, and to share that understanding with your IT and security teams. With that baseline in place, you can build out incident response plans from there.
The second step you can take toward reducing your security and compliance risk is automation. To continue the application development example, your team may have hundreds of source code repositories with dozens to hundreds of components pieced together into a portfolio of products. It is impossible to stay on top of everything with a manual process. Automation quickens the pace and drastically improves accuracy.
Maintaining visibility over the full menu of cloud services, applications, on-premises legacy systems, and whatever else is mission-critical to your organization is difficult, but that visibility is essential to managing your security and compliance risk and necessary for due diligence over your IT budget.
Shine some light on the blind spots
Having a heterogeneous IT environment has its benefits – more choice, maximized budget, and building a resilient technology backbone. But one chink in the armor, and everything is suddenly precarious. Sorting out how to fix the problem isn’t easy. With visibility into your network, cloud services, product development, and your users, you can make significant gains across your security and compliance risks and your budget. Without it, you’re left floundering in the dark.
Jesse Stockall is Chief Architect at Snow Software.