The Legacy Systems Lurch

From infrastructure to app delivery, from data to applications, it’s past time to modernize your practices, processes, and providers to ensure you’re able to take advantage of AI and whatever comes next.

Lori MacVittie

September 29, 2023

4 Min Read

When we talk about legacy systems, we often think of outdated servers and switches languishing in a data center somewhere. We read with morbid fascination about systemwide technology issues that strand fellow travelers over holiday weekends and shake our heads at their lack of foresight.

And then we sit in front of a screen and happily turn our business over to providers by leveraging generative AI services without a second thought, humming as we leap into the age of AI while productivity gains dance in our dreams like visions of sugarplums on the night before Christmas.

All without having considered that we are falling victim to the same legacy systems lurch.

Every enterprise that’s been in existence for more than a decade has legacy systems. The march of technology makes that inevitable. The infrastructure and applications carefully constructed and curated over the years were obsolete within the first year—if not months—of operation. But the impact is rarely felt until the next technology wave crashes into us, and we’re left relying on third-party providers to enable the business to take advantage of its benefits.

Modernization of legacy systems is critical

Too many organizations did not invest in modernizing their infrastructure or embrace more scalable operational practices and are now left without the data, pipelines, and capabilities needed to leverage AI of any kind—traditional or generative—without reliance on someone else.

That’s not to say that using third-party (cloud) providers is a bad thing. Indeed, leveraging public services can be a strategic advantage, such as when adopting security as a service to thwart the onslaught of bots and prevent fraud, or standardizing on a multi-cloud networking provider to enable connectivity across core, cloud, and edge.

But there is risk when that reliance involves sensitive data. From the accidental exposure of customer information to the exfiltration of code and other trade secrets critical to the success of the business, the risk of relying on a third-party provider for AI is existential. That risk stems not only from the exposure of sensitive data but from not knowing whether data, once shared with a model, has been impacted by a breach.

There is financial risk, as well. This is a lesson hundreds of enterprises learned the hard way when they embraced public cloud without a strategy, lured by the promise of agility and instant innovation, only to discover that costs quickly escalated and ate up the gains.

But too many organizations don’t have the infrastructure, practices, or data to deploy a private LLM or any other AI model. Part of the reason lies in a lack of modernization.

New apps not possible without modernizing legacy systems

Modernizing is not sexy. It’s rarely exciting. But it is necessary if enterprises are going to sustain and secure the business for the long term. From infrastructure to app delivery, from data to applications, it’s past time to modernize your practices, processes, and providers to ensure you’re able to take advantage of AI and whatever comes next.

No one can tell you exactly how to modernize because every enterprise architecture is unique. However, there are specific outcomes you can seek to achieve that will help you develop the right modernization strategy:

  • Ability to operate seamlessly across core, cloud, and edge (infrastructure)

  • Ability to automate deployments, threat mitigation, and performance optimization (app delivery and security)

  • Analysis of collected telemetry to produce actionable operational insights (data and observability)

  • Operational practices that focus on improvement and avoidance rather than find and fix (SRE operation)

  • A security strategy that starts in development and covers every digital asset: data, documents, and code (security)
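To make the telemetry outcome above concrete, here is a minimal sketch of turning collected latency samples into an actionable signal. The sample values, function names, and the 20% regression threshold are illustrative assumptions, not anything prescribed by a particular observability product:

```python
# Minimal sketch: turn raw latency telemetry into an actionable signal.
# Names, sample values, and the 20% threshold are illustrative assumptions.

def p95(samples):
    """Return the 95th-percentile value from a list of latency samples (ms)."""
    ordered = sorted(samples)
    index = max(0, int(0.95 * len(ordered)) - 1)
    return ordered[index]

def detect_regression(baseline_ms, current_samples, threshold=1.20):
    """Flag a regression when the current p95 exceeds baseline by the threshold."""
    current = p95(current_samples)
    return current > baseline_ms * threshold, current

# Usage: compare this hour's samples against last week's baseline.
regressed, current_p95 = detect_regression(
    baseline_ms=180.0,
    current_samples=[150, 170, 160, 210, 420, 175, 165, 190, 230, 185],
)
print(f"p95={current_p95}ms regression={regressed}")
```

The point of the sketch is the shape of the practice, not the math: collected telemetry becomes useful only when it is compared against a baseline and converted into a decision an operator (or an automated pipeline) can act on.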

There are existing practices—like Secure Development Life Cycle (SDLC)—and services—like Multi-cloud Networking (MCN)—and approaches—like Zero Trust (ZT)—that you can adopt to accelerate modernization efforts. Ultimately, modernization is a significant undertaking that requires a strategy, and that strategy must be developed through a collaboration that includes business and technology leaders to ensure minimal disruption with maximum success.

There is no doubt that we’ll continue to see systems very publicly fail and, inevitably, suffer from some AI-based breach. Modernizing can help avoid becoming another company on the growing list of those who have fallen victim to the legacy systems lurch.


About the Author(s)

Lori MacVittie

Principal Technical Evangelist, Office of the CTO at F5 Networks

Lori MacVittie is the principal technical evangelist for cloud computing, cloud and application security, and application delivery and is responsible for education and evangelism across F5's entire product suite. MacVittie has extensive development and technical architecture experience in both high-tech and enterprise organizations. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she authored articles on a variety of topics aimed at IT professionals. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University. She also serves on the Board of Regents for the DevOps Institute and CloudNOW, and has been named one of the top influential women in DevOps.
