IT’s Disconnect With Connected Users

Survey shows a widening gap between users' needs and IT's ability to meet them.

Doug Hazelman

May 23, 2016

3 Min Read

When major consumer online services experience downtime, it’s big news. Screams of “Facebook is down!”, “Netflix isn’t working!”, and “Twitter Fail Whale!” echo everywhere. People expect these services to be available 24/7, and when there’s a sustained outage, their reaction nearly reaches apocalyptic proportions. Consumers are constantly connected and expect online services to be always on.

When people go to work, they now expect their IT services to be available as well. Most major online services have built highly resilient infrastructures that span not just countries but the globe. They have the budget (and the reason) to build these fault-tolerant platforms. Most online services also enjoy a good measure of homogeneous infrastructure: they’re often supporting only one service, so they can deploy it across tens of thousands of instances of exactly the same hardware and software.

However, in the data center, it’s a bit different.

IT administrators typically deal with very heterogeneous systems, entailing multiple types of hardware and software from different suppliers. Keeping these systems always on is a much more daunting task because you have to juggle multiple service-level agreements (SLAs) across multiple platforms. For IT administrators and even their CIO bosses, the struggle is real, so much so that 84% of IT decision makers recently surveyed by Veeam reported that they’re not meeting the availability needs of their users. This marks a two-point increase over a similar study we conducted in 2014.

The survey finding illustrates a very large gap between what users need and what IT can deliver. With almost 50% of workloads now deemed “mission critical,” it’s more important than ever for IT to deliver true availability. The fact is, the pace of change in the data center is greater than it’s ever been, and it likely won’t slow down.

This means that IT decision makers need to support legacy systems as well as the new technologies that will hopefully replace them. Virtualization was a good start: it allowed consolidation of servers and applications while ushering in a new breed of protection and availability. The cloud also promises to increase availability, but with so many definitions of “the cloud,” it can be difficult to figure out what’s best for the business.

One of the best emerging uses of cloud technologies is disaster recovery (DR): simply replicate your environment to the cloud, then fail over when needed. Using the cloud for DR can be much more cost-effective than building out your own secondary site. Still, if you’re relying on a software-as-a-service vendor, you need to make sure it’s prepared to deliver true availability. After all, once the platform is outside the walls of your data center, you have little control over keeping it up and running.
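To make that pattern concrete, here’s a minimal sketch in Python of the monitoring half of a cloud failover, using only the standard library. Everything in it is illustrative rather than drawn from any vendor: the health-check URL, the thresholds, and the promote_cloud_replica() helper are hypothetical placeholders, and a real deployment would call the replication vendor’s failover tooling and a DNS or load-balancer API instead.

```python
import time
import urllib.error
import urllib.request

# Hypothetical values -- substitute your own endpoint and policy.
PRIMARY_HEALTH_URL = "https://primary.example.com/health"
FAILURES_BEFORE_FAILOVER = 3   # consecutive failed probes before acting
PROBE_INTERVAL_SECONDS = 30

def primary_is_healthy(timeout: float = 5.0) -> bool:
    """Probe the primary site's health endpoint; any error counts as down."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def promote_cloud_replica() -> None:
    """Placeholder: point users at the cloud replica.

    In practice this would invoke your replication vendor's failover
    command and update DNS or load-balancer records -- both outside
    the scope of this sketch.
    """
    print("Failing over: promoting cloud replica to primary")

def main() -> None:
    failures = 0
    while True:
        if primary_is_healthy():
            failures = 0
        else:
            failures += 1
            print(f"Health check failed ({failures}/{FAILURES_BEFORE_FAILOVER})")
            if failures >= FAILURES_BEFORE_FAILOVER:
                promote_cloud_replica()
                break
        time.sleep(PROBE_INTERVAL_SECONDS)

if __name__ == "__main__":
    main()
```

The consecutive-failure threshold is the key design choice here: failing over on a single missed probe would trade one availability problem for another, since failover (and the eventual failback) is rarely free.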

Delivering availability is now much more complex than just running a nightly backup of the servers. If you’re an IT administrator, a decision maker, or a CIO, know that you’re not alone: Veeam’s 2016 Availability Report shows that even as IT spends more, the availability gap is widening and users are demanding more.
