The Case For Change Tolerance In IT

Change is considered dangerous in IT, but IT teams must become more flexible and dynamic to keep up with today's business environment.

John Villasenor

September 18, 2015

6 Min Read

It has become a truism to state that the accelerating pace of business requires a corresponding acceleration in the pace of IT. That’s all well and good, but those of us who came up through the IT trenches learned the hard way to value stability and predictability over agility and speed. The typical mottos of IT departments are “If it ain’t broke, don't touch it,” and “Slow and steady wins the race.” In this world, change is dangerous, or at least suspect, and needs to be managed carefully.

The world that IT operates in, however, is changing rapidly and some of these attitudes need to be re-examined. Users’ expectations of IT are shaped by their experience outside the office, and they require the same type and quality of service at work that they receive at home or during their leisure time. These days, most people have a smartphone and expect to be able to use it to compare ratings and cost of services, share information, and collaborate seamlessly without artificial barriers. IT teams that are used to a more command-and-control approach can struggle to respond appropriately.

The same patterns that affect internal users also affect external customers. As their expectations continue to evolve and change, the business has to move equally rapidly to keep up and continue to serve its customers. This requires a corresponding acceleration in the development and release of the IT systems that support any new or changed business offering. The performance of IT systems is, in a very real sense, the performance of the business.

The problem with change

Enterprise IT departments are often caricatured by users as the “Department of No,” but there's a reason for IT’s caution when faced with new requests, such as hosting applications in the cloud or using iPhones for work email. Users may only see one small part of the change, but IT pros are very much aware of everything that is required behind the scenes to support it.

Any change is an iceberg, with 90% of it invisible to users. IT ops teams, however, are always aware of all the other changes that are required, not only to deliver the requested change in the first place, but to continue to maintain the resulting system in the future.

However, closing the door on impending change is no longer a feasible option. Organizations that refuse to evolve will be out-competed by their peers or by startups that can roll with the changes. Just in the last five years, Kodak went bankrupt and the iPhone became the most popular camera around.

Given that reality, IT operations teams have to figure out how to deliver that accelerated rate of change safely. Making a change is rarely as straightforward as it seems because of all the subsidiary changes that need to be made to other systems to avoid negative consequences, which might extend to downtime, outages, or security breaches in business-critical systems.

One big problem is that in IT, yesterday’s good decision is tomorrow’s problem. But it’s rarely possible to rip that decision out and start from scratch, because so many other things have been built on top of it in the meantime.

A perfect example of this phenomenon is event management. In the early days of any service, the focus is on getting it working. Once that's achieved, the focus shifts to keeping it working, which means monitoring it in some way: capturing and tracking all the events that might affect the service, from sources such as log files and SNMP traps. This is event management at its simplest.
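At its simplest, that capture step is just turning raw monitoring output into structured events that can be tracked. A minimal sketch, with a made-up log format and field names chosen purely for illustration:

```python
# Event management at its simplest: turn raw log lines into structured
# events and keep only the ones that might affect the service.
# The "TIMESTAMP LEVEL MESSAGE" format here is an illustrative assumption.

def parse_log_line(line):
    """Split 'TIMESTAMP LEVEL MESSAGE' into an event dict."""
    timestamp, level, message = line.split(" ", 2)
    return {"time": timestamp, "severity": level, "message": message}

events = [parse_log_line(l) for l in [
    "2015-09-18T10:00:00Z ERROR disk full on /var",
    "2015-09-18T10:00:05Z INFO backup completed",
]]

# Track only the events that need attention.
actionable = [e for e in events if e["severity"] == "ERROR"]
print(len(actionable))  # 1
```

At small scale, a human can read this output directly; the trouble described next starts when the volume outgrows that.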

Problem solved? Not really. Once you get beyond trivial scales, it’s no longer possible to manage this information interactively, so organizations turn to umbrella systems that can capture monitoring information from any number of sources and apply rules to sift the data.
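The sifting such an umbrella system does can be sketched as a rule table: each rule matches on event attributes and maps to an action. All names here are illustrative assumptions, not drawn from any particular product:

```python
# A minimal sketch of rule-based event sifting, as an umbrella monitoring
# system might do it. Rules, hostnames, and actions are hypothetical.

import re

# First matching rule wins; an unmatched event is dropped.
RULES = [
    {"source": re.compile(r"^web-\d+$"), "severity": "critical", "action": "page_oncall"},
    {"source": re.compile(r"^web-\d+$"), "severity": "warning",  "action": "ticket"},
    {"source": re.compile(r".*"),        "severity": "info",     "action": "log_only"},
]

def sift(event):
    """Return the action for the first rule matching this event."""
    for rule in RULES:
        if rule["source"].match(event["source"]) and rule["severity"] == event["severity"]:
            return rule["action"]
    return "drop"

print(sift({"source": "web-01", "severity": "critical"}))  # page_oncall
print(sift({"source": "db-01", "severity": "info"}))       # log_only
```

The catch, as the next paragraph notes, is that every rule in a table like this is something a person now has to maintain as the environment changes.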

This was once all very well and good, but now maintenance of those rules has itself become a significant overhead for the IT team, and something that actively delays or even prevents timely delivery of change requests from the business.

Embrace change

How do we break this impasse? What enabled us to get to this point is also what is preventing us from taking the next step, but we must take that step if we are to deliver what the business requires and expects of IT.

A new approach to IT has been emerging over the last few years, moving beyond the requirement for stability and predictability at all costs. Modern IT is flexible, dynamic and reactive. What enables this shift is not so much rapid delivery of services themselves -- although that is a requirement -- but an evolution in the infrastructure that underpins those services, and in how we manage it.

The unit of measurement of the new IT infrastructure is not a physical server, but any number of compute options from physical and virtual servers to containers and various forms of private and public cloud.

When I started in IT, our reference was a Visio chart, printed out and hung on the wall near the computer room. It was updated a handful of times a year. Today, a very large IT organization I know has “ASSUMED OBSOLETE IF PRINTED” baked right into the template for all of its IT documents!

The same thing goes for rules. In theory, a system based on rules like IF EVENT FROM SYSTEM A THEN SEND ALERT is all well and good, but what if SYSTEM A just moved to different hardware, or scaled out, or is one node in a cluster, or something else entirely? And what if it changes again tomorrow and again next week? You can’t keep up. You need management systems that can move at the same pace as what they are managing, or you will end up taking decisions based on bad data: “garbage in, garbage out,” as the saying goes.
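One way to keep rules in step with what they manage is to match events against a dynamically discovered inventory rather than a fixed hostname. A sketch of the idea, where the inventory dict stands in for a discovery source such as a CMDB or cloud API, and all names are hypothetical:

```python
# Illustrative sketch: match events by service tag rather than by a fixed
# hostname, so the rule survives when SYSTEM A moves to new hardware,
# scales out, or joins a cluster. The inventory stands in for a dynamic
# discovery source; hostnames and tags are hypothetical.

# The brittle version: a rule pinned to one host, broken the moment
# "system-a-host1" is retired.
STATIC_RULE_HOST = "system-a-host1"

# Inventory refreshed from discovery rather than hand-edited.
INVENTORY = {
    "system-a-host2": {"service": "system-a"},  # the host moved...
    "system-a-host3": {"service": "system-a"},  # ...and scaled out
    "system-b-host1": {"service": "system-b"},
}

def should_alert(event, service="system-a"):
    """Alert on any host currently backing the service, not a fixed name."""
    tags = INVENTORY.get(event["host"], {})
    return tags.get("service") == service

# The static rule misses the relocated host; the tag-based rule does not.
print(should_alert({"host": "system-a-host2"}))                 # True
print({"host": "system-a-host2"}["host"] == STATIC_RULE_HOST)   # False
```

The design point is that the rule references the service, which is stable, while the discovery layer absorbs the churn in hosts underneath it.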

IT organizations are at a crossroads, struggling to figure out what their role should be in the next decade. One thing is clear: It won’t be the same as it has been for the past decade with IT teams in full control of their destinies. That ship has sailed (laden with containers).

On the other hand, in-house IT operations can deliver enormous value to the business, precisely because they understand the 90% of ops that is invisible to users, below the waterline. By all means change the 10% -- move servers out to the public cloud (if that won’t get you in trouble with the compliance team, of course!) -- but make sure that when you do, your management approach can keep up. That's how you ensure IT is delivering for the business. If you don’t make that change soon, then you had better be change tolerant with your career.

About the Author

John Villasenor

Vice President of Worldwide Technical Services

