As the cloud continues to expand, it has taken many forms. From virtual partitions on mainframes to virtualization, cloud services and mobile technologies, the cloud is composed of diverse platforms and approaches. One of the newer applications closely related to cloud technologies is the Internet of Things (IoT).
The Internet of Things builds on the concept of the original Internet, which was (and still is) composed of networking nodes (servers and computers) all linked together via global networks, and which is used for everything from data transfers and stock quotes to web surfing and media streaming. But the IoT isn't about computer communications in the traditional sense; rather, IoT is about data collection from a much larger range of simple devices or sensors that communicate specific data to a centralized or semi-centralized collection point, and can receive simple commands back from that central source.
Compared to more traditional networking nodes, sensors have far less compute and storage power, but there are many of them -- orders of magnitude more. This raises the age-old question: should compute and storage resources be centralized or distributed? What will be needed to support IoT?
The applications for IoT are enormous. In healthcare, a range of sensors attached to any number of patients can transmit data about each patient to a central management console, alerting doctors and nurses when certain conditions are detected. In the home, IoT applications are essentially already in use: smart appliances like a refrigerator can signal the need for new water filters, and thermostats can both report temperature data and receive commands to adjust the temperature. And in agriculture, sensors on cows can report everything from the cow's body temperature to signs of stress or disease.
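The healthcare pattern above -- sensors streaming readings to a console that raises alerts on certain conditions -- can be sketched in a few lines. This is a minimal illustration; the field names and thresholds are assumptions for the example, not a medical standard.

```python
# Hypothetical sketch of threshold-based alerting at a central console.
# Field names ("heart_rate", "temp_c") and thresholds are illustrative only.

def check_vitals(reading):
    """Return a list of alert strings for any out-of-range vitals."""
    alerts = []
    if reading["heart_rate"] < 40 or reading["heart_rate"] > 130:
        alerts.append(f"heart rate out of range: {reading['heart_rate']} bpm")
    if reading["temp_c"] > 39.0:
        alerts.append(f"fever detected: {reading['temp_c']} C")
    return alerts

# A reading arrives from a patient's sensor; the console checks it.
print(check_vitals({"heart_rate": 145, "temp_c": 38.2}))
```

The point is architectural rather than algorithmic: the sensor only transmits raw data, while all of the decision logic lives at the collection point.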
With so many potential applications possible, the question arises as to what kind of architecture will be required to support such diverse uses with such a high volume of sensors. In the early days of computing, terminals connected to a mainframe and were dependent on that mainframe to be of any use. Later, the development of the personal computer negated the need for a mainframe. But the advent of the Internet re-introduced the client-server model, reopening the centralized vs. distributed discussion. And now, cloud services have taken us back to reconsider our architectures once more.
But for IoT applications, a client-server approach may not be the best model. For example, in many areas, the sensors may be required to make some decisions locally, or communicate with more than one external source based on the data collected. And once self-driving cars come online in a big way, any IoT-type devices onboard those cars will likely need to make some decisions instantly, without waiting to be told what to do by the centralized compute source.
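That split between acting locally and deferring to a central source can be made concrete. The sketch below assumes a hypothetical onboard proximity sensor: a time-critical reading triggers an immediate local action, while everything else is simply reported upstream. The function and field names are illustrative assumptions.

```python
# Hypothetical sketch: an edge device acts locally on time-critical readings
# and defers everything else to a central service. Names and the 5 m threshold
# are illustrative assumptions, not from any real vehicle system.

def handle_reading(distance_m, send_to_cloud):
    """Act locally if an obstacle is close; otherwise just report upstream."""
    if distance_m < 5.0:
        # Too close to wait for a round trip to the centralized compute source.
        return "BRAKE"
    send_to_cloud({"distance_m": distance_m})
    return "REPORTED"

# Close obstacle: decided on-device, no network round trip.
print(handle_reading(2.0, lambda msg: None))
```

The design choice is that only the latency-critical branch runs on the device; the central source still sees the routine telemetry.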
At the same time, a fully distributed model will also fail, as the cost and technical constraints of the sensors will not allow heavy compute and storage resources to be integrated into them.
Creating the right network
Also in question is the underlying network architecture that will support IoT applications. To hear certain wireless carriers tell it, IoT is custom-built for wireless technologies. And, to a certain degree, they may be right. But even with the lower rates these carriers are charging for IoT traffic, that approach may not work. There are many situations where the sensors and devices in question may not be able to receive a consistent wireless signal, or security concerns may compel a given company to keep its IoT traffic segmented and away from wireless networks.
The key, then, is to slide intelligently between the two extremes as the specific IoT application requires. For example, a hybrid approach, in which a localized IoT wide area network (WAN) gateway carries more communication, compute and storage resources, may work better than a purely centralized or purely distributed approach. Imagine an IoT WAN gateway that bridges the thousands of sensors on a farm to the cloud, rather than each sensor trying to connect to the Internet directly. A key criterion is to leverage modern WAN orchestration technologies to support the networking needs of the IoT application.
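The farm-gateway idea can be sketched as a simple aggregation layer: thousands of sensors report locally to the gateway, which forwards one compact summary upstream instead of opening an Internet connection per sensor. Everything here (class, method and field names) is a hypothetical illustration of the pattern, not a real product API.

```python
# Hypothetical sketch of a farm IoT WAN gateway: many sensors report locally,
# and the gateway sends one aggregated summary to the cloud. All names are
# illustrative assumptions.
from statistics import mean

class Gateway:
    def __init__(self):
        self.readings = {}  # sensor_id -> latest temperature reading

    def receive(self, sensor_id, temp_c):
        """Local, cheap: a sensor reports to the nearby gateway."""
        self.readings[sensor_id] = temp_c

    def summarize(self):
        """One compact upstream message in place of thousands of connections."""
        temps = list(self.readings.values())
        return {
            "sensors": len(temps),
            "mean_temp_c": round(mean(temps), 2),
            "max_temp_c": max(temps),
        }

# A thousand cow-mounted sensors report in; one summary goes to the cloud.
gw = Gateway()
for i in range(1000):
    gw.receive(f"cow-{i}", 38.5 + (i % 10) * 0.1)
print(gw.summarize())
```

The trade-off is the one the article describes: the gateway needs more compute and storage than a bare sensor, but far less than a data center, and it sharply reduces the WAN traffic the application generates.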
So, as this new IoT tide begins to build, we as an industry will need to look at the different scenarios, challenges and opportunities that develop over time, and decide how best to provide the network architectures that will support the Internet of Things. Clearly, cloud-based services are here to stay, and IoT adoption will only increase over time. All of this raises the question: Is the centralized vs. distributed debate settled, or just beginning?