Hitting the Script Limit

The scripts used to manage data centers are a good source of best practice, but they have their limits.

January 12, 2005


There's an old quip about the dog and its ability to walk on its hind legs: While the dog can accomplish the task, the results aren't pretty. Not so different is the outcome when it comes to running data centers with traditional scripting languages: once again, the result is not so pretty, particularly when managing complex IT tasks like asset tracking, security compliance, multi-system patch management, server provisioning, and failover.

Yet, as you probably know, almost every data center is managed using (pick your poison) Perl, Python, or PHP scripts as the primary automation tool for driving infrastructure management best practices. Individual IT personnel tend to have favorite custom scripts of their own creation, which means the best practice, in effect, is proprietary to them and not part of the corporate memory. So when Earl the IT guy goes to lunch, the best practice goes out the door with him, scripts and all.

As data centers become increasingly heterogeneous, on-demand computing models take hold, and virtualization of compute, storage, and networking resources continues its move from missionary to mission-critical, there is a serious need for more sophisticated approaches to data center automation.

This is particularly obvious with recurring management tasks such as patch management. Firms have to apply patches over and over again, and the job is challenging because it has to be performed across a variety of hardware platforms and operating systems.
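To make that concrete, here is a minimal Python sketch of the kind of per-OS patch loop many shops hand-roll. The host inventory, the ssh transport, and the package commands are hypothetical stand-ins, not a recommendation.

```python
# Minimal sketch of a hand-rolled, per-OS patch loop.
# Host inventory and commands are hypothetical; real environments vary widely.
import subprocess

HOSTS = {              # hypothetical host inventory
    "web01": "debian",
    "db01": "redhat",
}

PATCH_COMMANDS = {     # OS-specific patch commands (illustrative only)
    "debian": "apt-get -y upgrade",
    "redhat": "yum -y update",
}

def patch_host(host: str, os_family: str) -> bool:
    """Run the OS-appropriate patch command on a host over ssh."""
    result = subprocess.run(["ssh", host, PATCH_COMMANDS[os_family]])
    return result.returncode == 0

for host, os_family in HOSTS.items():
    status = "patched" if patch_host(host, os_family) else "FAILED"
    print(f"{host}: {status}")
```

Every host that doesn't fit the inventory, and every step that has to happen before or after the patch, becomes another special case bolted onto the loop.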

Tasks can also vary greatly based on the type of event. For example, a system failure will generate a different response depending on whether the application runs on a single system or on multiple systems. While the script fanatic might be tempted to say, "We do lots of that type of stuff in scripts," the reality is closer to the walking dog once you start factoring in the real world, where events such as fault conditions or performance thresholds need to trigger discrete actions. Add the requirement that different tasks be completed in a specific order, the dependency conditions between steps within tasks, and the fact that tasks can range from manual to semi-automated to fully automated, and you have a recipe for automation that is brittle and unable to respond to real-time information. Generally, this does not scale to the requirements of today's multi-tier application environments.
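For illustration only, here is a rough Python sketch of an event triggering a fixed sequence of dependent steps. The task names are hypothetical, and the dependency resolution is just the standard library's topological sorter, not any particular management product.

```python
# Sketch: a fault event kicks off tasks whose ordering is expressed as
# explicit dependencies rather than buried in script control flow.
# Task names and steps are hypothetical.
from graphlib import TopologicalSorter  # Python 3.9+

def drain_traffic():    print("draining traffic from the failed node")
def restart_service():  print("restarting the application service")
def verify_health():    print("running health checks")
def restore_traffic():  print("putting the node back in rotation")

# Each task maps to the set of tasks that must complete before it runs.
RECOVERY_PLAN = {
    drain_traffic:   set(),
    restart_service: {drain_traffic},
    verify_health:   {restart_service},
    restore_traffic: {verify_health},
}

def on_event(event: str):
    """React to a monitored event by running the recovery plan in order."""
    if event == "fault":
        for task in TopologicalSorter(RECOVERY_PLAN).static_order():
            task()

on_event("fault")
```

Even in this toy form, the ordering and dependencies are data a tool can inspect; buried inside a one-off script, they are not.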

Consider, for example, the process of provisioning an e-commerce application. This might include first launching the database, then verifying that the database is running optimally, setting policies for dealing with sub-optimal conditions, then launching the application server, completing a verification path specific to the application server, then pointing the application server to the database, launching the Web server... and so on. The key point here is that, while such tasks can be done solely with scripts, the result isn't pretty, given the challenge of tracking different events across multiple systems and the progression of tasks across distributed environments.
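As a sketch only (every function name here is hypothetical), the provisioning sequence above might be modeled as an ordered list of steps, each paired with an optional verification check, so the workflow halts cleanly when a step fails instead of plowing ahead.

```python
# Sketch of the e-commerce provisioning sequence as ordered, verifiable steps.
# All step functions are hypothetical placeholders.

def launch_database():
    print("launching database")

def verify_database():
    print("verifying database")
    return True

def launch_app_server():
    print("launching application server")

def verify_app_server():
    print("verifying application server")
    return True

def connect_app_to_db():
    print("pointing application server at database")

def launch_web_server():
    print("launching Web server")

PROVISION_ECOMMERCE = [
    # (step name, action, verification or None)
    ("launch database",   launch_database,   verify_database),
    ("launch app server", launch_app_server, verify_app_server),
    ("connect app to db", connect_app_to_db, None),
    ("launch web server", launch_web_server, None),
]

def run_workflow(steps):
    for name, action, verify in steps:
        action()
        if verify is not None and not verify():
            raise RuntimeError(f"step '{name}' failed verification; halting")
        print(f"step '{name}' complete")

run_workflow(PROVISION_ECOMMERCE)
```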

To be clear, I am not suggesting that IT shops throw away their scripts and start from scratch. It would be neither practical nor prudent to kill the dog. First off, legacy never goes away. We all want to feel like geniuses and we are too damn busy to start from scratch, but the perfect world of unlimited time, human resources, and money just doesn't exist.

Any viable solution for automating IT needs to embrace and extend an IT shop's pre-existing arsenal of scripts while addressing the real-world complexities described earlier. That's just practical reality. Plus, it is good business sense, inasmuch as scripts, primitive though they may be, are the closest thing to an enterprise's best IT practices, and they actually offer a significant measure of automation.

So what's the driver for making such changes sooner rather than later? Three long-term trends are driving most IT shops to embrace procedural automation approaches in a big way. By procedural automation, I mean management systems specifically designed for integrating multiple, discrete sub-tasks into a collective operation according to sequence, schedule, dependencies, and/or events.

One: Data centers are shifting from fewer, expensive, proprietary clustering solutions to many low-cost Lintel/Wintel modular form factors, like 1U devices and blade servers (the price/performance benefits of replacing one $1 million cluster with 100 $2,000 boxes are self-evident). But IT headcounts are remaining flat, creating a need to dramatically increase the number of servers managed per IT person.

Two: As open source and Linux continue to pick up steam, there is an immediate need to fill the gaps in the management of Linux environments. Also, companies need to bridge the gaps between Linux and Windows. This represents a greenfield opportunity for enterprises to re-work their data center management and automation with lifecycle orientation in mind.

Three: On-demand, utility computing models are emerging, along with the related trend toward resource virtualization, which will itself drive the need for management support for dynamic resource allocation.

To be clear, no one questions the power of scripts. But that power has its limits.

— Mark Sigal, CEO, UXComm Inc.
