I am happy to say that Network Computing is now IPv6-ready. That's a pretty neat leap for us, and when I get around to setting up a tunnel broker from my home office, I will be enjoying some IPv6 goodness. The neat thing about being IPv6-ready is that whether you are coming from an IPv4 or an IPv6 network (and, frankly, you won't know or care which), you will still get to the same place. Making Network Computing available via IPv6 means you will get to us no matter where you are. Congrats to our IT team who got us there.
Getting to IPv6 isn't always as straightforward as it seems. Network Computing runs on a web farm that is load balanced by an F5 BIG-IP load balancer. That way we can add more servers when demand requires it without making any changes outside the Qwest co-lo where the servers are located. Our IT department had to wrestle with a few significant issues in deciding how to roll out IPv6. First and foremost, the Qwest co-lo doesn't have IPv6 coming in yet. Our CIO, David Michael, has been asking about IPv6 availability for two years and hopefully will hear some good news soon.
IT looked to alternatives. We needed a solution that would transparently put www.networkcomputing.com onto the IPv6 network without impacting the existing server farm or our web analytics product, Adobe's Omniture. Rather than moving Network Computing to an IPv6-ready co-lo, which was a non-starter, IT set up a pair of CentOS servers running Apache proxies and gave them IPv6 addresses. With this cluster, IT can add more servers if demand requires it, and it sits at a location where IPv6 is available. IPv6 Network Computing traffic comes to this location and is then proxied to the Network Computing IPv4 servers.
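The setup described above amounts to an Apache reverse proxy listening on an IPv6 address and forwarding requests to the IPv4 web farm. A minimal sketch of such a configuration might look like the following; the IPv4 address shown is a documentation address (RFC 5737), not our actual back-end, and the directives assume mod_proxy and mod_proxy_http are loaded:

```apache
# Listen on all IPv6 addresses, port 80
Listen [::]:80

<VirtualHost [::]:80>
    ServerName www.networkcomputing.com

    # Pass the original Host header through to the back-end farm
    ProxyPreserveHost On

    # Forward all inbound IPv6 requests to the IPv4 farm
    # (192.0.2.10 is a placeholder documentation address)
    ProxyPass        / http://192.0.2.10/
    ProxyPassReverse / http://192.0.2.10/
</VirtualHost>
```

ProxyPreserveHost matters here because the back-end servers and the analytics product key off the www.networkcomputing.com hostname; without it, the back end would see the proxy's address instead.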
Networking purists will quickly point out that we are breaking the end-to-end nature of IPv6 by using a web proxy, and they are right, but doing so provides sufficient connectivity so that all of our features are available. It also requires no changes to our server farm while we transition to full IPv6 support. Once IPv6 is plumbed into the Qwest co-lo, the plan is to move the IPv6 address to a virtual IP on the F5 load balancer and proxy inbound requests to our web servers, as we do now. Even then, Michael says, and UBM (our parent company) hosting manager Jason Kates concurs, we will likely retain the current IPv4 addressing behind the load balancer, since there is no reason to migrate the back end to IPv6, for the same reasons that companies use RFC 1918 reserved address space for private IPv4 addressing today.
Like many other organizations, we have to migrate to IPv6 at some point, and this is the first step in the process: getting our public-facing servers ready. There is no rush to roll out massive changes, and by taking the transition in smaller bits, you can manage it smoothly. I imagine we can live on this arrangement for several years, until traffic load or technical dependencies require us to move further.