IoT Tipping Point: Connection Capacity
For most folks, the focus of any discussion about the internet of things (IoT) tends to devolve to data. That's because all those devices and sensors are generating a lot of what pundits expect will be the digital gold that drives business to be more efficient and profitable in the future.
For NetOps, data means bandwidth, and all that data is going to put a strain on current network capacity. In today’s multi-cloud world, networking pros must consider not only inbound but outbound traffic because a significant percentage of business operations today is conducted via off-premises, cloud-based applications like Office 365, Workday, Dropbox, and Salesforce. IoT adds to the already growing load on the network, which in turn puts pressure on NetOps to ensure that everyone has their fair share of bandwidth.
This is all true, but there’s an aspect of IoT that’s rarely mentioned, let alone considered: connections. You know, those things that must be established before the data is ever transferred.
Consider this research from Nokia Bell Labs that calls out the relationship: “With the advent of IoT, operators will also have to address the need for massive increases in control plane capacity to handle the sporadic transmissions generated from billions of devices. IoT traffic generates a substantially higher volume of signaling traffic relative to data traffic. For example, a typical IoT device may need 2,500 transactions or connections to consume 1 MB of data.”
Yes, you read that right: 2,500 transactions or connections for a single megabyte of data.
That’s one device, by the way. Multiply that by hundreds, thousands, or even millions of devices needing to share or consume 1 MB of data and you’ve got yourself a metric ton of connections that need to be managed. “In the disruptive view, daily network connections due to cellular IoT devices will grow by 16 to 135-fold by 2020 and will represent three times the number of connections initiated by human-generated traffic,” the researchers added.
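To put the ratio in perspective, a quick back-of-the-envelope calculation shows how fast signaling load compounds. This sketch uses only the 2,500-connections-per-MB figure quoted above; the fleet size and per-device data volume are hypothetical inputs, not from the research.

```python
# Back-of-the-envelope signaling load, based on the Nokia Bell Labs
# figure of ~2,500 transactions/connections per 1 MB of IoT data.
CONNECTIONS_PER_MB = 2_500  # from the research quoted above


def daily_connections(devices: int, mb_per_device_per_day: float) -> int:
    """Estimate the total daily connections an IoT fleet generates."""
    return int(devices * mb_per_device_per_day * CONNECTIONS_PER_MB)


# A hypothetical fleet: 100,000 sensors, each consuming 1 MB per day.
print(daily_connections(100_000, 1.0))  # 250,000,000 connections/day
```

A quarter of a billion connections a day from a modest fleet, and every one of them has to be set up and torn down by devices along the path.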
Now, this research specifically focused on the growing burden IoT devices will place on service providers, but the general impact of the typical signaling-to-data traffic ratio is not peculiar to providers. Most IoT devices have similar patterns of behavior in that they frequently poll and report, and payloads typically fit in a single packet. The bottom line is that IoT is going to put a strain on connection capacity just as surely as, and perhaps sooner than, it does on bandwidth.
As networking pros, you no doubt readily see the problem, because you know that “the network” is a neat little term that means a whole bunch of devices strung together in some (hopefully) logical architecture that delivers data from one end to the other.
So every one of those devices better be able to handle those hundreds, thousands, or millions of connections that must occur to transfer all that data pundits are in a tizzy over. That’s in addition to all the existing connections to apps, inbound and outbound, that need to be supported for all those other business operations to keep moving with alacrity.
Answers to this dilemma range from a network-wide upgrade to ensure every device has the capacity to handle those connections to moving off-premises. While the latter option is one of the better answers to the problem of IoT traffic and data storage, it’s not necessarily the right answer for industrial IoT. Sensors and devices whose warnings and conditions need immediate attention suffer from the latency inherent in shipping data out to the cloud and then back in.
But neither option may help your network support the massive number of connections growing from CCTVs, digital door locks, sensors, monitors, and alarms. The answer to this conundrum is often “edge” or “fog” computing.
Both terms refer to the practice of moving the systems and services responsible for making sense out of IIoT data closer to those little devices. Data might still be shipped out to the cloud by those devices that live in the “fog” between the cloud and the data center, but the data is analyzed first and in very near real-time. Any actions that might be necessary based on the data can be executed immediately because the devices in the “fog” are imbued with the intelligence to make that call, perhaps literally.
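The fog pattern described above can be sketched in a few lines: analyze readings locally, act on alarms immediately, and forward only a compact summary upstream. Everything here is illustrative — the `Reading` type, the threshold, and the summary fields are assumptions standing in for whatever a real deployment would use.

```python
# A minimal sketch of edge/"fog" processing: act locally, summarize upward.
# All names and values here are hypothetical, for illustration only.
from dataclasses import dataclass
from statistics import mean


ALARM_THRESHOLD = 90.0  # e.g. degrees Celsius; purely illustrative


@dataclass
class Reading:
    sensor_id: str
    value: float


def process_batch(readings: list[Reading]) -> dict:
    """Handle alarms at the edge; return a compact summary for the cloud."""
    alarms = [r for r in readings if r.value >= ALARM_THRESHOLD]
    for r in alarms:
        # In a real deployment this would trip a local actuator or page an
        # operator directly -- no round trip to the cloud, no added latency.
        pass
    # Only this small summary travels upstream, instead of every reading --
    # far fewer connections for the core network to carry.
    return {
        "count": len(readings),
        "mean_value": mean(r.value for r in readings),
        "alarm_sensors": [r.sensor_id for r in alarms],
    }
```

The point isn’t the code itself but the shape of it: the raw readings, and the connections they would otherwise generate, never leave the edge.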
This model reduces the number of devices required to support massive connection capacity and simultaneously relieves the core network from needing to handle them. It’s an architectural approach to segmenting traffic combined with intelligent network devices and services that can alleviate the pressure growing on the corporate and application backbones that are just as critical to business success.
Connections are the real “tipping point” for IoT. Data is significant, yes, but it’s got to get there first. If we don’t take the appropriate architectural steps now to ensure the network can handle all the connections that need to be made, that data isn’t going anywhere.