Almost everyone believes that some primary storage is moving into the cloud, but there is disagreement on how much is acceptable after balancing accessibility and security issues against cost benefits. Many companies are planning a hybrid approach to the cloud where some computing and storage is in a public cloud and the rest, including much of their primary storage, is kept in-house in a private cloud.
Hybrid cloud reflects the conservatism of the IT industry, exacerbated by concerns about retaining jobs and sustaining budgets. The latter is no small matter, since the technologies swirling about on the near horizon, such as containers, fast direct-write bus-based 3D XPoint memory, and higher core densities, threaten to shrink the server farm dramatically over the next five years or so.
However, hybrid clouds are just a stop-gap measure. The migration of storage -- including primary storage -- to the public cloud is inevitable.
Cloud economics are inexorable and the new technologies will be picked up avidly by the mega cloud service providers, leaving no hiding place from the cost question. Using public clouds allows tenants to avoid almost all capex and also save on opex, mainly through the CSPs’ efficient buying practices, high levels of utilization, and very low maintenance cost.
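To see why the capex/opex argument bites, here is a back-of-envelope sketch. Every figure in it is a hypothetical assumption for illustration, not a quoted price from any vendor or CSP.

```python
# Back-of-envelope TCO comparison for 500 TB of primary storage.
# All prices below are illustrative assumptions, not real quotes.
TB_STORED = 500

# On-premises: capex amortized over the hardware's life, plus opex.
array_capex = 500_000            # $ for a 500 TB array (assumed)
hardware_life_years = 5
onprem_opex_per_year = 60_000    # power, space, admin (assumed)
onprem_annual = array_capex / hardware_life_years + onprem_opex_per_year

# Public cloud: pure opex, priced per GB-month (assumed rate).
cloud_rate_gb_month = 0.023      # $/GB-month (assumed)
cloud_annual = TB_STORED * 1000 * cloud_rate_gb_month * 12

print(f"on-prem: ${onprem_annual:,.0f}/year")
print(f"cloud  : ${cloud_annual:,.0f}/year")
```

With these assumed numbers the cloud already edges out the in-house array before counting the CSPs' buying power and utilization advantages; the point is the shape of the comparison, not the specific totals.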
The result is that more and more storage is migrating, and the rate of increase of public cloud storage is accelerating, as analysis of AWS adoption shows. If this analysis holds up, the cloud will be the preferred place to store any data within five years.
A major problem for hybrid clouds is that straight transfer of data from the private store to a public cloud is bedeviled by our very slow WAN systems. (Verizon's CEO said the company was going slow on fiber roll-out just two years ago!) The telcos have been milking the lack of serious competition, and that is a major impediment to hybrid clouds.
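The WAN bottleneck is easy to quantify. The sketch below uses an illustrative 100 TB dataset and an assumed 70% effective link utilization; both figures are assumptions, not measurements.

```python
# Rough transfer-time arithmetic for moving a dataset over a WAN.
# Dataset size and link efficiency are illustrative assumptions.
def transfer_days(dataset_tb: float, link_mbps: float,
                  efficiency: float = 0.7) -> float:
    """Days to move dataset_tb terabytes over a link_mbps link,
    assuming the stated effective utilization."""
    bits = dataset_tb * 1e12 * 8            # terabytes -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 86_400                 # seconds -> days

for mbps in (100, 1_000, 10_000):
    print(f"{mbps:>6} Mb/s -> {transfer_days(100, mbps):7.1f} days for 100 TB")
```

At 100 Mb/s, 100 TB takes over four months to move; even a full gigabit link needs nearly two weeks, which is why straight private-to-public transfers struggle.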
Some have proposed stopgaps such as placing storage at colocation sites. This solves the speed problem between the "private" space in the colo and the public cloud, while meeting HIPAA and other regulatory requirements. It doesn't speed the link back to the in-house cloud one bit, though! Real-time compression and encryption still elude us, too.
Failing a solid fix for traffic jams on the WAN, we can expect the hybrid approach to come under pressure almost immediately. With public clouds exhibiting good security compared with IT in general, wise heads will soon be asking whether the economics and technical efficiency of going all the way public make sense.
The hybrid dilemma is complicated by the changing face of server technology. Intel’s 3D XPoint memory is a game changer, and one result will be much higher bandwidth out of the server, placing a further strain on networks. At the same time, deployment of that technology by the cloud providers, coupled with wide acceptance of the containers approach, will drive instance price points for the public cloud downward at an even faster pace, making an all-in transition quite attractive.
In the US, belated attempts by the telcos to provide fiber wherever Google plans a roll-out aren't anywhere close to matching the need, at least on the timeframe in which it's needed. Fiber in big cities will take a decade to roll out, though businesses may get it sooner. This means that most businesses, in most cities, have no way of speeding up their public/private crosslinks in time for it to influence their decisions.
Other technologies will increase the pressure to move storage to the public cloud. Tiering, compression, and deduplication of stored data will preferentially reduce the mega-CSPs' costs and, since they have the funding to lead technology roll-outs, expect their price reductions to hit early. Containers -- another technology popularized by Google -- can pack roughly 3x the number of instances onto a server compared with VMs!
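Deduplication is the simplest of these efficiency techniques to illustrate. Below is a minimal sketch of fixed-block dedup, where each unique block is stored once, keyed by its hash; the 4 KB block size is an assumption, and real systems add variable-sized chunking and collision handling on top of this idea.

```python
# Minimal fixed-block deduplication sketch: keep one copy of each
# unique block, keyed by its SHA-256 digest. Block size is assumed.
import hashlib

BLOCK = 4096  # bytes per block (assumption; real systems vary)

def dedup(data: bytes):
    """Split data into blocks; return (unique-block store, recipe)."""
    store, recipe = {}, []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # store each block once
        recipe.append(digest)             # ordering needed to rebuild
    return store, recipe

def rebuild(store: dict, recipe: list) -> bytes:
    """Reassemble the original data from the store and recipe."""
    return b"".join(store[d] for d in recipe)

# Highly redundant data dedupes dramatically:
data = b"A" * BLOCK * 100 + b"B" * BLOCK * 100
store, recipe = dedup(data)
print(len(store), "unique blocks for", len(recipe), "logical blocks")
```

Here 200 logical blocks collapse to 2 stored blocks, which is why redundant enterprise data is so much cheaper for a CSP to hold than its raw size suggests.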