Network Computing is part of the Informa Tech Division of Informa PLC
Cloud Migration: Going Live
When moving workloads, data and applications from a traditional physical or hybrid hosting environment to the cloud, details matter. The process can be enormously complex and fraught with potential technical and logistical obstacles. Understanding how to navigate the migration process efficiently and effectively can save significant amounts of time and money.
This is the final post in a four-part series of articles describing best practices, and detailing the priorities and pitfalls that must be addressed and avoided, when performing a cloud migration. In my last post, I discussed source environment preparation and execution management, as well as testing and security protocols that should be in place immediately prior to the go-live event. In this final piece, I review the elements of a successful go-live, and highlight specific strategies for post-migration maintenance and development operations.
Prepare a reliable rollback plan
Your first priority must be to have a reliable rollback plan in place. If there is a problem during the go-live event, or if the process is taking longer than your go-live event window allows, the ability to execute a full rollback is critical.
Should issues arise that threaten to push the work past projected timelines and impact live services during normal business hours -- an even bigger problem when an organization has an SLA in place with its customers limiting downtime -- engaging your rollback plan can prevent costly business disruptions.
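One way to make the rollback decision objective rather than ad hoc is to define a "point of no return" up front: the last moment at which a full rollback can still complete inside the maintenance window. The sketch below illustrates that idea; the function name, the inputs, and the specific decision rule are hypothetical, not part of any particular migration tool.

```python
from datetime import datetime, timedelta

def cutover_decision(now: datetime, window_end: datetime,
                     est_remaining: timedelta,
                     rollback_duration: timedelta) -> str:
    """Return 'continue' or 'rollback' at a go-live checkpoint.

    point_of_no_return is the last moment at which a full rollback
    can still finish before the maintenance window closes.
    """
    point_of_no_return = window_end - rollback_duration
    if now + est_remaining <= window_end and now <= point_of_no_return:
        return "continue"
    return "rollback"
```

Evaluating this at each checkpoint during the go-live event means the rollback call is made while there is still time to execute it, rather than after the window has already been blown.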
Organize a database freeze
Coordinate a suspension of database activity and halt content changes (to the extent possible) before the go-live event. If there have been code modifications to the environment during the migration, it's important to document and communicate those to the migration specialist so that those pieces of code can be updated and synchronized. To avoid unnecessary delays and complications, perform a fresh sync of the data prior to going live.
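A simple way to confirm that the final sync actually brought the destination up to date is to compare per-table row counts (or checksums) on both sides after the freeze. The sketch below assumes the counts have already been gathered, for example via `SELECT COUNT(*)` on each table; the function and its inputs are illustrative.

```python
def sync_drift(source_counts: dict, dest_counts: dict) -> dict:
    """Return tables whose row counts differ between source and destination.

    A non-empty result means another sync pass is needed before go-live.
    Row counts are a coarse check; checksums catch in-place updates too.
    """
    drift = {}
    for table in source_counts.keys() | dest_counts.keys():
        s = source_counts.get(table, 0)
        d = dest_counts.get(table, 0)
        if s != d:
            drift[table] = {"source": s, "destination": d}
    return drift
```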
Take proactive steps to avoid disruption
If the migration must be performed with zero downtime, a reverse proxy technique can be used to forward traffic to the destination servers, ensuring that all users are immediately directed to the new environment and avoiding a potentially lengthy wait for DNS propagation.
If possible, try to stage a trial go-live event, especially when dealing with large databases or large data volumes. This can provide a better sense of the time required to synchronize the data, and help determine if your timeline goals and projected downtime are realistic.
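A trial run also gives you real numbers to plug into a back-of-the-envelope sync estimate. The sketch below is one such estimate under stated assumptions: the overhead factor of 1.3 (for checksumming, small-file overhead, and retries) is a hypothetical starting point that should be replaced with the figure observed during the trial.

```python
def estimate_sync_hours(data_gb: float, throughput_mbps: float,
                        overhead_factor: float = 1.3) -> float:
    """Rough wall-clock estimate, in hours, for syncing data_gb of data
    over a link sustaining throughput_mbps.

    The 1.3 overhead factor is an assumption; calibrate it against the
    timings measured in a trial go-live run.
    """
    seconds = (data_gb * 8 * 1000) / throughput_mbps  # GB -> megabits
    return seconds * overhead_factor / 3600
```

For example, 500 GB over a sustained 1 Gbps link works out to roughly an hour and a half with this padding, which tells you quickly whether a four-hour overnight window is realistic.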
Perform load testing and performance assessments
Once your systems have been updated and any web services or business applications have been made live in the new environment, have resources in place to monitor the load/traffic on the sites and assess the performance of the applications to make sure everything is running smoothly. This is especially important if you elected not to perform a prior load test.
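The post-go-live performance check is most useful when it compares like with like: the same percentile of response time, before and after. The sketch below shows one hedged way to do that from collected timing samples; the 20% regression tolerance is an assumed threshold to tune per application, and the function names are illustrative.

```python
import math

def latency_percentile(samples_ms: list, pct: float) -> float:
    """Nearest-rank percentile of observed response times (milliseconds)."""
    ordered = sorted(samples_ms)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def regressed(baseline_ms: list, current_ms: list,
              pct: float = 95, tolerance: float = 1.2) -> bool:
    """Flag a regression if the current p95 exceeds the pre-migration
    baseline p95 by more than 20%.

    The 1.2 tolerance is an assumed starting point, not a standard.
    """
    return latency_percentile(current_ms, pct) > \
        tolerance * latency_percentile(baseline_ms, pct)
```

Comparing percentiles rather than averages keeps a handful of slow outliers from being masked by a healthy mean.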
Deploy post-migration maintenance resources
Once the migration is complete and you are live in the new environment, a number of monitoring and maintenance priorities emerge. If you are working with an unmanaged provider, you will need to actively patch the servers at the operating system level with the latest security patches (managed hosts usually take care of that). If you have implemented a DevOps tool chain as part of the migration, be sure to have resources in place to update the software involved in that tool chain on a regular basis.
At an enterprise level, the goal should be to review updates and implement them on a monthly basis. If you are actively making changes to applications that may impact the resources required for those applications to run, consider performing load tests on the updated code on a regular basis -- perhaps quarterly or semiannually -- to ensure that the application’s performance has not been degraded.
Address security vulnerabilities
Once established in the new environment, run vulnerability scans on the servers and on your websites at least quarterly (and preferably monthly) to ensure they're secure and that any identified vulnerabilities can be addressed. Regardless of whether you decided to test your disaster recovery plan before going live, make sure to test it on a regular basis going forward -- quarterly or semiannually at a minimum.
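Cadences like these are easy to let slip, so it can help to track them mechanically: record when each recurring task last ran and flag anything past its allowed interval. The sketch below is one hypothetical way to do that; the task names and intervals are examples matching the cadences above.

```python
from datetime import date, timedelta

def overdue_checks(last_run: dict, today: date, cadence_days: dict) -> list:
    """List recurring security tasks past their allowed interval.

    last_run maps task name -> date last performed; cadence_days maps
    task name -> maximum allowed interval in days (e.g. 30 for monthly
    vulnerability scans, 90 for quarterly DR tests). Tasks never run
    are treated as overdue.
    """
    return [task for task, limit in cadence_days.items()
            if today - last_run.get(task, date.min) > timedelta(days=limit)]
```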
Review analytics and practice ongoing vigilance
For ecommerce or web applications, it makes sense to use software, such as Google Analytics, to monitor usage trends going forward. A sudden drop in daily visits or a significant difference in bounce rates could indicate a problem that needs to be addressed.
Having the right monitoring software (such as Sensu or Nagios) in place to monitor your site and track the data makes it more likely that you can track down and resolve any issues that arise. Some tools can automatically trigger an alarm if there is a major change in the functionality of the site or the web-facing applications, making it possible to prevent a potentially costly and time-consuming disruption.
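The drop-detection idea behind such an alarm can be sketched in a few lines: compare the most recent day's visits against a trailing average and alert when it falls below a threshold. The 50% threshold here is an assumed starting point, not a recommendation from any particular monitoring product.

```python
def traffic_alert(daily_visits: list, drop_threshold: float = 0.5) -> bool:
    """Alert if the most recent day's visits fall below a fraction of
    the trailing average.

    daily_visits is ordered oldest -> newest. The 0.5 threshold (alert
    on a >50% drop) is an assumption; tune it to your traffic's normal
    day-to-day variance to avoid false alarms.
    """
    *history, today = daily_visits
    baseline = sum(history) / len(history)
    return today < drop_threshold * baseline
```

In practice a tool like Nagios or Sensu would run a check along these lines on a schedule and route the alert to the on-call team.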
From pre-migration planning and technical assessments to upgrades and testing, undertaking a cloud transformation project is no easy feat. Possessing a firm understanding of the current best practices for successfully migrating workloads, data and applications from traditional hosting environments to the cloud has become more important than ever. But with the proper strategy and an appreciation of the time and effort involved, organizations can better prepare for a cloud transformation and leverage the long-term benefits of cloud computing.