Tackling Edge Computing Challenges
There's a lot of hype about edge computing, but processing data at the edge presents some tough problems. Some solutions are emerging.
April 26, 2018
Edge computing is a form of computing in which the processing occurs close to the source of activity and data. Working close to the edge reduces the latency of transporting data from the source to the processing units, which makes it ideal for use cases that require rapid responses, such as the internet of things. Edge computing is complementary to cloud computing, which typically relies on centralized processing residing far from the source of data. In edge-based systems, which some call the “near cloud,” the goal is to extend the boundary of the cloud closer to the edge.
It’s easy to think edge computing magically solves problems that cloud computing can’t, but there’s a trade-off due to the highly distributed nature of edge systems. The edge nodes are not completely independent: each may need to share information with the others, and keeping that data consistent is a challenge. The question is: How do I coordinate a large number of edge computing systems while still allowing them to work independently? This is the distribution, consistency, and synchronization problem, and it has perplexed designers of distributed systems for many years.
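To make the problem concrete, here is a minimal, hypothetical sketch in Python (the node names and shared counter are illustrative, not taken from any particular product): two edge nodes accept writes against their own local copy of the same value, and a naive “take the latest value” sync silently loses one node’s updates.

```python
# Hypothetical illustration: two edge nodes update a local copy of the same
# counter, then synchronize naively by keeping the "largest" value seen.

class EdgeNode:
    def __init__(self, name):
        self.name = name
        self.page_views = 0  # local copy of a shared counter

    def record_view(self):
        # Local write: fast, no round trip to a central database.
        self.page_views += 1

def naive_sync(a, b):
    # Naive reconciliation: one node's local updates silently disappear.
    winner = max(a.page_views, b.page_views)
    a.page_views = b.page_views = winner

east, west = EdgeNode("east"), EdgeNode("west")
for _ in range(3):
    east.record_view()   # 3 views recorded at the east node
for _ in range(2):
    west.record_view()   # 2 views recorded at the west node

naive_sync(east, west)
print(east.page_views)   # prints 3, but the true global total is 5: updates were lost
```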
The number of edge computing systems will be large, so any solution must scale accordingly. Altogether, this is a hard problem to solve.
Edge data
Aside from some very specialized workloads that simply process events and upload data, most applications running at the edge need to share security, customer, and other contextual information. What kinds of apps need to do this? IoT apps, gaming, advertising, virtual or augmented reality, and mobile apps are good examples.
Some people refer to this idea as data gravity. The data is the relevant part of any application workload, so applications are designed and deployed so that access to the data does not become a bottleneck; things work best when the programs and data reside near each other. If a central shared database must be maintained, then the programs need to reside close to that database, which is not what we want for decentralized edge computing.
Another challenge is that the computers at the edge are not high powered. It’s hard to imagine running a traditional database or application server, where licensing and processing costs may be high, at the edge. Lightweight approaches, such as containers or serverless systems, are a better fit for the edge.
Emerging solutions
To solve these problems, some vendors are introducing lightweight systems that enable edge-based computing and storage. For example, AWS Greengrass is a system that processes data locally using serverless computing and locally cached data, but relies on the cloud for long-term storage.
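As a rough sketch of what that pattern looks like in practice, the hypothetical Python function below assumes the Greengrass Core SDK (greengrasssdk) running on an edge device: it reacts to a sensor reading locally and publishes only a compact summary upstream for long-term storage. The topic name, threshold, and helper function are illustrative assumptions, not Greengrass defaults.

```python
# Sketch of a Greengrass-style Lambda function: process locally, publish a
# summary to the cloud. Topic name, threshold, and local action are illustrative.
import json

import greengrasssdk  # Greengrass Core SDK, available on the edge device

client = greengrasssdk.client('iot-data')

LOCAL_THRESHOLD = 75.0  # configuration cached on the edge node

def function_handler(event, context):
    # React to the sensor reading immediately, without a cloud round trip.
    reading = float(event.get('temperature', 0.0))
    if reading > LOCAL_THRESHOLD:
        actuate_cooling()  # hypothetical local action

    # Send only a compact summary upstream for long-term storage and analytics.
    client.publish(
        topic='sensors/summary',
        payload=json.dumps({'device': event.get('device_id'),
                            'temperature': reading}),
    )

def actuate_cooling():
    # Placeholder for a local control action (e.g., toggling a GPIO pin).
    pass
```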
Startups such as Kuhiro take this a step further by creating the illusion of a single, logically consistent database while each edge node works on its own local copy.
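One common technique for providing that illusion (a general illustration, not necessarily what Kuhiro does) is a conflict-free replicated data type, where each node writes only to its own slice of the state and a deterministic merge reconciles replicas without losing updates. Here is a minimal grow-only counter sketch in Python:

```python
# A grow-only counter (G-Counter), a simple CRDT: each node increments only its
# own slot, and merging takes the element-wise maximum, so replicas converge
# without losing updates regardless of sync order.

class GCounter:
    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}  # node_id -> increments observed from that node

    def increment(self, n=1):
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def merge(self, other):
        # Element-wise max is commutative, associative, and idempotent.
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

    def value(self):
        return sum(self.counts.values())

east, west = GCounter("east"), GCounter("west")
east.increment(3)   # 3 local updates at the east node
west.increment(2)   # 2 local updates at the west node

east.merge(west)
west.merge(east)
print(east.value(), west.value())  # 5 5 -- both replicas agree, nothing lost
```

Unlike the naive sync shown earlier, this merge can run in any order, any number of times, and both nodes still arrive at the correct total.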
While cloud providers have developed edge solutions for internal use, enterprises may soon realize they need to solve similar problems as IoT applications become more widespread. In addition, some applications run on mobile devices and may require processing at the edge, such as close to cell phone base stations.
The problems related to edge computing and edge data seem very complex, but IT pros should step back and examine emerging solutions. It's worthwhile preparing today so that the right deployment decisions can be made tomorrow.