Make sure to include the network in your data protection planning.
A data backup strategy is the backbone of any enterprise IT shop. Businesses need to protect their data from application or server failures, as well as from improper data manipulation, deletion or destruction through accidental or nefarious methods such as ransomware.
In planning their backup strategy, companies can overlook the network as part of the overall design. Distributed and server-to-cloud backups rely on the underlying network to move data from point A to B in a timely and secure manner. Therefore, it makes sense to include the network as an integral part of any data backup and recovery strategy. I'll discuss four ways to do that.
The first and most obvious step is to verify that your network maintains a proper level of end-to-end resiliency. Whether you are talking about local, off-site or cloud service provider backups, the network should be designed so that there are no single points of failure that could render a data backup or restore useless. A single point of failure is a device or link whose failure brings down all or a large portion of a LAN.
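One way to check a topology for single points of failure is to model it as a graph and look for articulation points, nodes whose removal disconnects the network. Below is a minimal sketch of that check in Python; the topology and device names are hypothetical examples, not a real network.

```python
# Sketch: find single points of failure in a network topology by locating
# articulation points (nodes whose removal disconnects the graph).

def articulation_points(graph):
    """Return the set of nodes whose removal partitions the graph."""
    disc, low, points = {}, {}, set()
    timer = [0]

    def dfs(node, parent):
        disc[node] = low[node] = timer[0]
        timer[0] += 1
        children = 0
        for neighbor in graph[node]:
            if neighbor == parent:
                continue
            if neighbor in disc:
                low[node] = min(low[node], disc[neighbor])
            else:
                children += 1
                dfs(neighbor, node)
                low[node] = min(low[node], low[neighbor])
                # A non-root node is an articulation point if a child subtree
                # cannot reach an ancestor without going through this node.
                if parent is not None and low[neighbor] >= disc[node]:
                    points.add(node)
        # The DFS root is an articulation point if it has 2+ DFS children.
        if parent is None and children > 1:
            points.add(node)

    for start in graph:
        if start not in disc:
            dfs(start, None)
    return points

# Hypothetical topology: "core1" is the only path between the backup
# server and the access layer.
topology = {
    "backup-server": {"core1"},
    "core1": {"backup-server", "access1", "access2"},
    "access1": {"core1"},
    "access2": {"core1"},
}
print(articulation_points(topology))  # {'core1'} -- a single point of failure
```

Running the same check against a design with redundant core switches and dual uplinks should return an empty set for the backup path.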
Also, consider how automated your network failover mechanisms are. Traditional network redundancy techniques include dynamic routing protocols, HSRP/VRRP, VPNs and WAN carrier diversity. More recently, SDN, SD-WAN and multi-cloud management have begun to appear in forward-thinking data backup roadmaps.
Data backups have the potential to consume a tremendous amount of throughput. The major concern is that certain links along the way will become congested to the point that they negatively impact other applications and users on the network. Avoiding congestion by building a separate network that's purpose-built for backups is usually cost prohibitive, so most enterprises perform backups using the same network hardware and links as their production traffic.
Consequently, a key step in any backup strategy is to properly baseline traffic across the network to determine how backups will impact link utilization. Understanding the data flows and throughput requirements of backups, along with utilization baselines over time, allows engineers to design a backup strategy that will not impact daily operations. In some cases, this means scheduling backups outside of peak network hours. In others, it means upgrading the throughput capacity of certain network links along a backup path.
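The core of that baselining exercise is simple arithmetic: compare the throughput a backup job needs against the headroom your baseline says the link has during the window. A rough sketch, with entirely hypothetical numbers:

```python
# Sketch: rough check of whether a nightly backup fits its window, given a
# measured baseline utilization on the shared link. All values are
# hypothetical examples.

def backup_fits_window(backup_gb, window_hours, link_mbps, baseline_util):
    """Return (fits, required_mbps) for a backup over a shared link.

    baseline_util is the fraction of link capacity already consumed by
    other traffic during the window, taken from your utilization baseline.
    """
    headroom_mbps = link_mbps * (1.0 - baseline_util)
    required_mbps = (backup_gb * 8 * 1000) / (window_hours * 3600)
    return required_mbps <= headroom_mbps, required_mbps

# Hypothetical: 2 TB over a 1 Gbps WAN link that is 30% busy overnight,
# with a six-hour backup window.
fits, rate = backup_fits_window(backup_gb=2000, window_hours=6,
                                link_mbps=1000, baseline_util=0.30)
print(fits, round(rate, 1))  # False 740.7 -- needs ~741 Mbps, headroom is 700
```

In this example the job narrowly misses the window, which is exactly the kind of result that argues for either a longer window or a link upgrade along the backup path.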
Once a backup plan is in place, it's necessary to continue to monitor link utilization using NetFlow and SNMP tools to ensure that bottlenecks don't creep up on you over time.
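The SNMP side of that monitoring boils down to sampling interface octet counters and converting the delta into utilization. A minimal sketch, assuming the counters come from the standard IF-MIB `ifHCInOctets` object; the sample values below are hypothetical, and in practice an SNMP library or NMS would do the polling:

```python
# Sketch: compute link utilization from two SNMP ifHCInOctets samples.

def link_utilization(octets_t1, octets_t2, interval_s, link_mbps):
    """Percent utilization between two 64-bit octet counter samples."""
    delta_bits = (octets_t2 - octets_t1) * 8
    link_bits = link_mbps * 1_000_000 * interval_s
    return 100.0 * delta_bits / link_bits

# Hypothetical five-minute poll on a 1 Gbps interface.
util = link_utilization(octets_t1=10_000_000_000,
                        octets_t2=33_500_000_000,
                        interval_s=300, link_mbps=1000)
print(f"{util:.1f}%")  # 62.7%
```

Trending this value across polls is what reveals a backup-induced bottleneck creeping up before users notice it.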
Another way to mitigate the impact backups have on shared network links is to apply quality of service (QoS) techniques. Using QoS, we can identify, mark and ultimately prioritize traffic flows as they traverse a network. Large companies with highly complex networks and backup strategies often opt to mark data backups at a lower class so that more critical, time-sensitive applications, such as voice and streaming video, take priority and traverse the network freely when link congestion occurs.
Backup packets are queued or dropped according to policy and will automatically transmit when the congestion subsides. This allows for round-the-clock backups without the need for strict off-hours backup windows and alleviates concern that the backup process will impair production traffic that shares the same network links.
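The queuing behavior described above can be illustrated with a toy scheduler: higher classes transmit first, and low-priority backup traffic is held, then tail-dropped once its queue fills. This is a conceptual sketch, not any vendor's QoS implementation; the class names and queue depth are hypothetical.

```python
# Sketch: strict-priority queuing with tail drop for the lowest class.

from collections import deque

class PriorityScheduler:
    def __init__(self, classes, queue_depth=3):
        # Classes listed highest priority first, e.g. ["voice", "backup"].
        self.queues = {name: deque() for name in classes}
        self.queue_depth = queue_depth
        self.dropped = 0

    def enqueue(self, traffic_class, packet):
        queue = self.queues[traffic_class]
        if len(queue) >= self.queue_depth:
            self.dropped += 1      # tail drop per policy
        else:
            queue.append(packet)

    def transmit(self):
        """Send the head packet of the highest-priority non-empty queue."""
        for queue in self.queues.values():
            if queue:
                return queue.popleft()
        return None

sched = PriorityScheduler(["voice", "backup"])
sched.enqueue("backup", "b1")
sched.enqueue("voice", "v1")
sched.enqueue("backup", "b2")
print(sched.transmit())  # v1 -- voice goes first even though b1 arrived earlier
print(sched.transmit())  # b1 -- backups drain once the voice queue is empty
```

The key property for backup planning is the last line: backup packets are not lost by default, only deferred until the congested interval passes.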
No conversation about backups is complete without discussing data security. From a network perspective, this includes a plan for extending internal security policies and tools out to the WAN and cloud where off-site backups will eventually reside.
Beyond these data protection basics, network and security administrators must also battle shadow IT, a growing problem that undermines the safety and backup/restore capability of corporate data. Backups are only useful when they capture all critical data, and shadow IT prevents that because data is increasingly stored in unauthorized cloud applications.
Tools such as NetFlow and cloud access security broker (CASB) platforms can help track down and curb the use of shadow IT. A CASB can monitor traffic destined for the internet and control which cloud services employees can use.
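The underlying check is straightforward: compare flow destinations against a sanctioned-service list and flag everything else. A minimal sketch of that filter over NetFlow-style records; the domains, record fields and flows are hypothetical examples, and a real CASB or collector would supply the records and enforce policy.

```python
# Sketch: flag flows whose destinations are not on the sanctioned cloud
# service list -- the kind of check a CASB automates.

SANCTIONED = {"backup.example-corp.com", "approved-saas.example.com"}

def flag_shadow_it(flow_records):
    """Return flows destined for unsanctioned external services."""
    return [flow for flow in flow_records
            if flow["dest"] not in SANCTIONED]

flows = [
    {"src": "10.1.1.5", "dest": "approved-saas.example.com", "bytes": 120_000},
    {"src": "10.1.1.9", "dest": "random-file-share.example.net", "bytes": 5_400_000},
]
for flow in flag_shadow_it(flows):
    print(flow["src"], "->", flow["dest"])  # only the unsanctioned flow prints
```

Flagged destinations that carry significant volume are good candidates either for blocking or for formal adoption, so the data they hold lands back inside the backup plan.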