
Understanding the 3 Key Edge Deployment Options

(Image: IoT network, via Pixabay)

As legitimate low-latency, high-bandwidth use cases for edge computing continue to grow within the enterprise, IT leaders and architects are beginning to wonder exactly how an edge computing deployment could fit into the rest of their corporate network infrastructure.

According to market analysis research conducted by Million Insights, the global edge computing market is expected to grow at a compound annual growth rate (CAGR) of 37% through 2027. With IoT finally taking off and an increased need for real-time data collection, processing, and analytics, most medium to large enterprises are expected to need compute and storage delivered closer to end users at some point in the future.

When looking at the edge computing landscape, network architects will find myriad options and pricing models. Yet if you boil the offerings down and look at them from a customer perspective, three distinct models emerge. Let's look at these three edge deployment models and discuss why a company may prefer one over the others.

Edge computing managed by public cloud providers (CSPs)

Most enterprise organizations leverage the public cloud for one reason or another. Thus, the idea of extending a cloud partner’s services and management processes to metropolitan edge locations may sound highly appealing. Cloud service providers (CSPs), including major players such as AWS, Microsoft Azure, and Google Cloud, are beginning to extend their reach by operating regional extensions of their cloud platforms to bring computational power closer to end users. Existing customers that already host apps and data in one of these clouds will find that the easiest path to the edge is through the CSP they already work with and are comfortable operating within.

That said, understand that the metropolitan locations cloud providers offer today are few and far between. For example, AWS's Local Zones edge offering is only available to the public in Los Angeles, with future locations planned for Boston, Houston, and Miami. While cloud providers will undoubtedly continue expanding their metro edge locations, businesses should be aware that deployment options will remain limited for the time being.
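As an illustration, the sketch below uses the AWS SDK for Python (boto3) to check which Local Zones a given region exposes to an account and whether the account has opted into them. The region and zone group names are examples, and the calls require valid AWS credentials.

```python
# Sketch: list AWS Local Zones visible to the current account and their opt-in status.
# Assumes boto3 is installed and AWS credentials/region are already configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# AllAvailabilityZones=True also returns zones the account has not opted into yet.
response = ec2.describe_availability_zones(
    AllAvailabilityZones=True,
    Filters=[{"Name": "zone-type", "Values": ["local-zone"]}],
)

for zone in response["AvailabilityZones"]:
    print(f'{zone["ZoneName"]:<22} group: {zone["GroupName"]:<18} opt-in: {zone["OptInStatus"]}')

# Opting in to a Local Zone group (e.g., the Los Angeles group) is a one-time account setting:
# ec2.modify_availability_zone_group(GroupName="us-west-2-lax-1", OptInStatus="opted-in")
```

Running a check like this against each region of interest is a quick way to confirm whether a CSP's edge footprint actually covers the metros a workload needs before committing to the model.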

Edge computing managed by LTE/5G telecommunications carriers

Because of the nature of their business, telecommunications providers have a leg up on CSPs from a local edge deployment perspective. Often running nationwide networks, carriers are building out small-footprint data centers in strategic locations that can, in turn, be leased to customers for edge computing purposes. Additionally, telecommunications carriers are betting heavily that a great deal of edge computing will occur on LTE or 5G mobile networks. Thus, the lowest-latency path for mobile users and devices will be to deploy edge services directly within the LTE/5G carrier’s network. This is known as multi-access edge computing (MEC) and is offered today by the major US telecommunications carriers, including AT&T, T-Mobile, and Verizon.

Carriers are also partnering with CSPs to combine the cloud provider's infrastructure with the carrier’s wider reach into the most popular metropolitan locations. For example, AT&T has partnered with Google Cloud and Microsoft Azure, while Verizon and AWS have paired up to deliver a competitive alternative.

Edge computing hosted on-premises

Finally, cloud providers offer enterprise customers the ability to extend their public cloud footprint by deploying cloud hardware within private data centers or colocation facilities. This is ideal for use cases where most of the users and devices that require high-bandwidth, low-latency services are located within the corporate network. Some of the more popular private edge offerings available in the US include:

  • AWS Outposts
  • Azure Private Edge Zone and Azure Arc
  • Google Cloud Anthos

The biggest difference among the three CSP offerings is that AWS Outposts requires the customer to deploy AWS-specific hardware in their private data center, while Google’s Anthos allows customers to leverage some or all of their existing data center hardware and software. Microsoft, on the other hand, offers both options: Private Edge Zones for customers that wish to install Microsoft’s purpose-built edge computing platform, and Arc for those that want to extend Azure onto infrastructure they already own.
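To make the "extend the cloud onto your own hardware" idea concrete, here is a rough sketch of inventorying servers that have been connected to Azure Arc from Python. It assumes the azure-identity and azure-mgmt-hybridcompute packages; the subscription ID and resource group are placeholders, and attribute names can vary slightly across SDK versions.

```python
# Sketch: list on-premises servers registered with Azure Arc.
# Assumes azure-identity and azure-mgmt-hybridcompute are installed and that the
# environment already provides Azure credentials (e.g., after `az login`).
from azure.identity import DefaultAzureCredential
from azure.mgmt.hybridcompute import HybridComputeManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "edge-servers-rg"                         # placeholder

client = HybridComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Each Arc-connected machine appears as a first-class Azure resource, so existing
# hardware can be managed with the same tooling as native Azure VMs.
for machine in client.machines.list_by_resource_group(RESOURCE_GROUP):
    print(machine.name, machine.location, getattr(machine, "status", "unknown"))
```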

Choosing an edge computing model: location, location, location

Because edge computing is still relatively new, the biggest factor when choosing a deployment option is whether edge services are available in the geographic region or regions where they are needed. Additionally, thought must be put into where future edge locations may be required – and the likelihood that the edge compute provider will have services in those areas. In time, carriers and cloud providers will expand their edge computing reach and get to the point where locations are no longer a factor in the decision-making process. But until that time comes, the choice of which edge computing deployment model works best for the organization largely revolves around whether the provider can deliver services to the locations that matter most.
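One practical way to weigh candidate locations is simply to measure round-trip latency from the sites that matter to each provider's nearest edge endpoint. The sketch below shows one illustrative approach using only the Python standard library; the hostnames are hypothetical placeholders, not real provider URLs, and should be replaced with the endpoints each provider supplies for the metros under evaluation.

```python
# Sketch: compare TCP connect latency from a branch site to candidate edge endpoints.
# The hostnames below are hypothetical placeholders for illustration only.
import socket
import time

CANDIDATE_ENDPOINTS = {
    "csp-local-zone-lax": ("edge.lax.example-csp.com", 443),
    "carrier-mec-dfw": ("mec.dfw.example-carrier.net", 443),
    "on-prem-edge": ("edge-gw.corp.example.com", 443),
}

def tcp_connect_ms(host: str, port: int, timeout: float = 2.0) -> float | None:
    """Return the TCP handshake time in milliseconds, or None if unreachable."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None

for label, (host, port) in CANDIDATE_ENDPOINTS.items():
    latency = tcp_connect_ms(host, port)
    result = "unreachable" if latency is None else f"{latency:.1f} ms"
    print(f"{label:<22} {result}")
```

Repeating a measurement like this from each office, factory, or store that will consume edge services gives a simple, data-driven basis for comparing the three deployment models in the locations that matter most.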