Networking In The IoT Age: The Cloud Latency Factor
Many IoT sensors can't tolerate the latency associated with the cloud, creating new challenges for networking.
September 14, 2016
Back at the turn of the century (heh, I love saying that), my better half was tasked with migrating the local utility's meter-reading capabilities from manual to mechanical. The hundreds of thousands of network-enabled meters deployed in the effort would be counted as part of the Internet of Things today. But back then we didn't use such fancy terms; we just called it automated.
Still, the concept that remote sensors could collect data and automatically report it back to a centralized location for processing is one that’s as familiar today as the notion of remote computing, e.g. cloud computing. What’s changed today is the extensive use of sensors in both industrial and consumer markets, and our growing reliance on the data they provide to make decisions for us.
Now, my Fitbit doesn't make decisions for me, unless you count its annoying reminders that I'm not walking enough, but there are plenty of cases where things do make decisions. And some are increasingly life-and-death decisions, such as shutting down part of an assembly line when one of the components is about to overheat and possibly malfunction, or worse, explode. A variety of systems, industrial and consumer alike, are increasingly making decisions based on data collected from the Internet of Things, and some of them rely on nearly instantaneous feedback. Delays could realistically mean the difference between life and death.
That’s particularly problematic due to the seemingly natural relationship between cloud and IoT. Lots of data needs to be processed at massive scale. What better way to do that than in a cloud environment? The problem, it turns out, is one familiar to network folks. Yes, it’s your old performance nemesis: latency.
Latency in cloud computing environments is nothing new. Services like CloudHarmony and providers like Cedexis actively measure latency in cloud environments across the globe for a variety of services, including DNS, compute, storage, and CDN. Latency is an important factor in any application's performance, whether that app is controlling a thing, a thermostat, a car, or helping me track down that elusive Pikachu. It's just that sensitivity to latency in cloud computing environments is heightened by things that rely on responsiveness and can't tolerate delays a human user wouldn't even notice.
Latency has always been a known issue, but one that could be generally addressed with the traditional tricks of the trade: tweak the TCP stack, use a CDN, turn on compression, and use geolocation when possible. For apps and users, that’s worked well. Latency in the cloud still exists, but we know how to work around it. The right set of app services can offset network latency and provide the application experience users demand.
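As a quick illustration of the "tweak the TCP stack" trick (my sketch, not a specific product's configuration), here's a minimal Python example that disables Nagle's algorithm via TCP_NODELAY so small writes go out immediately instead of being coalesced into larger segments. The hostname, port, and request are placeholders.

```python
import socket

# Connect to an application endpoint (placeholder host and port).
sock = socket.create_connection(("app.example.com", 8080))

# Classic latency tweak: disable Nagle's algorithm so small payloads
# are sent immediately rather than buffered waiting for more data.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Send a small request; with TCP_NODELAY set, it hits the wire right away.
sock.sendall(b"GET /status HTTP/1.1\r\nHost: app.example.com\r\n\r\n")
response = sock.recv(4096)
print(response.decode(errors="replace"))
sock.close()
```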
With IoT, and in particular industrial and health uses where sensors often serve as the first red flag for problems, those kinds of solutions aren't enough. At least that's what I'm hearing from customers who are trying to deal with the problem of latency as it relates to the Internet of Things. And that means latency in general, whether in the cloud or in the data center, is something we're going to have to find solutions for again.
One of the ways organizations are addressing the latency problem is by abandoning HTTP for MQ Telemetry Transport (MQTT) and other IoT-oriented protocols. The advantage of these emerging protocols is that they are designed to be data centric rather than document centric. HTTP, even with its recent 2.0 overhaul, is still a very document-centric protocol intended to display information for human consumption. Conversely, protocols like MQTT are designed to transfer very compact bits of data with far less overhead.
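To make that concrete, here's a minimal sketch of a one-shot MQTT publish using the open-source paho-mqtt Python client (my choice of library for illustration; the broker hostname and topic are assumptions, not anything from a real deployment). The whole message is a small payload plus a few bytes of MQTT framing, rather than a full set of HTTP headers.

```python
import json
import paho.mqtt.publish as publish  # pip install paho-mqtt

# A compact, data-centric reading: just the fields that matter.
reading = json.dumps({"temp_c": 71.3, "ts": 1473861600})

# One-shot publish to a broker (placeholder hostname and topic).
publish.single(
    "plant/line1/motor7/temperature",  # hierarchical topic, not a URI
    payload=reading,
    qos=1,                             # at-least-once delivery
    hostname="broker.example.com",
    port=1883,
)
```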
But simply changing protocols doesn't solve everything, and as is often the case, there will be an impact on networking. We'll need to learn how best to transport, secure, and scale MQTT. This isn't just an "HTML to XML to JSON over HTTP" transition; these are completely different protocols, riding atop TCP, that need to be supported in the network, even if that network is in the cloud. We're going to need to learn to speak topics instead of URIs; though both are hierarchical in nature, they're not the same thing. And over TCP, MQTT requires each client to hold a connection to the server-side broker open at all times. Scalability is definitely going to be a challenge.
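The subscriber side shows both points at once. This sketch (again using paho-mqtt, with a placeholder broker and an assumed topic layout) subscribes with a wildcard topic filter and then blocks on a long-lived TCP connection to the broker, dispatching messages as they arrive; that always-on connection per client is exactly what makes scaling interesting.

```python
import paho.mqtt.subscribe as subscribe  # pip install paho-mqtt

def on_message(client, userdata, message):
    # Messages arrive on hierarchical topics, e.g. plant/line1/motor7/temperature.
    print(f"{message.topic}: {message.payload.decode()}")

# '+' matches exactly one topic level, so this filter catches the temperature
# feed from every motor on every line. The call holds a TCP connection to the
# broker open and blocks, handing each message to on_message as it arrives.
subscribe.callback(
    on_message,
    "plant/+/+/temperature",
    qos=1,
    hostname="broker.example.com",
    port=1883,
)
```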
Networking in the age of IoT is only going to be more critical, whether in the cloud or in the data center, not just to the business selling the latest Internet-enabled gadget, but to the increasing number of people whose safety and health rely on the ability of things to swiftly and reliably share data across the network.