What does water have to do with hybrid IT and Interop ITX? According to my friend, Dave McCrory, data is the new water in the hybrid IT paradigm. I was fortunate enough to attend McCrory’s presentation at Interop ITX during the Future of Data Summit. McCrory is currently the CTO of Basho, and he famously coined the term “data gravity” in 2010. Data gravity -- or as his friends have come to call it, McCrory’s Law -- simply states that data is attracted to data and will continue to attract more data.
At Interop ITX, McCrory took this idea a step further: Data has reached such a critical mass that processing is moving to the data rather than the data moving to the processing. This scenario is playing out in the various trends associated with hybrid IT. For example, data may reside on-premises while the computing and data analytics services are consumed in the cloud, or vice versa: data is aggregated and processed in the cloud, but the results are delivered back on-premises for decision-making.
Next, McCrory introduced the notion of data agglomeration, where data migrates to, and sticks with, the services that provide best-in-class advantages. He illustrated the concept with analogies: car dealerships and furniture stores cluster in the same vicinity, and the world's major cities tend to sit near large bodies of fresh water. This is all about long-term sustainability.
With respect to cloud services, this is the reason why companies that incorporate weather readings into their applications are leveraging IBM Watson. IBM bought The Weather Company and all its internet of things (IoT) sensors, which produce an ocean of data and continue to record massive amounts of weather information. With IBM Watson putting that weather data through deep-learning models, there is intrinsic value in leveraging IBM for weather data analysis and prediction models.
In these instances, data may seem like oil because of the value-add potential of data analysis and machine-learning constructs. However, as McCrory pointed out, the one thing data lacks before it provides utility is insight. According to McCrory, insights derived from data provide the tipping point in certainty and knowledge that ultimately leads to action -- putting data to use.
Putting data to use leads to artificial intelligence (AI); machine learning, a subset of AI; and deep learning, a subset of machine learning. The good and bad of data actions can be evaluated along three axes: monitoring, optimization, and control. With monitoring, one can determine the quality of data; with optimization, the efficacy of data; and with control, the best way to put that data to work. The key is knowing how an organization defines "good enough."
I can’t do justice to the quality of McCrory’s content and its context in our current hybrid IT environment, where data is as valuable to long-term viability as water. His presentation was definitely worth the price of admission to Interop ITX, a conference I enjoy for its focus on IT practitioners.
Do you think data has gravity? Do you think data agglomeration will lead organizations to adopt multiple cloud service providers as they seek competitive advantage through hybrid IT strategies? Please share your thoughts in the comment section below.