How to explain edge computing in plain English

What is edge computing? How does it relate to cloud computing? We cut through the confusion and help you explain this term, even to non-technical audiences

Edge computing is already in use all around us – from the wearable on your wrist to the computers parsing intersection traffic flow. Other examples include smart utility grid analysis, safety monitoring of oil rigs, streaming video optimization, and drone-enabled crop management.

And those applications appear poised to expand. Gartner predicts that the share of enterprise-generated data created and processed at the edge will grow from less than 10 percent today to 75 percent by 2025.

Yet, explaining edge computing to non-technical audiences can be tough – in part, because this type of data processing can take place in any number of ways and in such a variety of settings. At its simplest, edge computing is the practice of capturing, processing, and analyzing data near where it is created.

[ How can automation free up more staff time for innovation? Get the free eBook: Managing IT with Automation. ] 

What is edge computing?

"For edge devices to be smart, they need to process the data they collect, share timely insights and if applicable, take appropriate action.  Edge computing is the science of having the edge devices do this without the need for the data to be transported to another server environment," says Red Hat chief technology strategist E.G. Nadhan. "Put another way, edge computing brings the data and the compute closest to the point of interaction."

“If we think about a hub-and-spoke model, the cloud is the hub and everything on the outside of the spokes is the edge.”

It’s “everything not in the cloud,” says Ryan Martin, principal analyst with ABI Research. “If we think about a hub-and-spoke model, the cloud is the hub and everything on the outside of the spokes is the edge.” This decentralized approach enables organizations to move processes like analytics and decision making closer to where the actual data is produced.

“Put simply, edge computing is data analysis that takes place on a device in real-time,” says Nima Negahban, CTO of Kinetica. “Edge computing is about processing data locally, and cloud computing is about processing data in a data center or public cloud.”

[ Learn more about how edge fits with hybrid cloud strategy. Get the free eBooks, Hybrid Cloud Strategy for Dummies and Multi-Cloud Portability for Dummies. ]

Edge computing definitions

That eases us into the edge conversation. We also asked other experts to chime in with their own definitions of edge computing, in clear terms that may prove useful for IT leaders in various discussions, including those with non-technical people.

Rosa Guntrip, senior principal marketing manager, cloud platforms at Red Hat: "Edge computing refers to the concept of bringing computing services closer to service consumers or data sources. Fueled by emerging use cases like IoT, AR/VR, robotics, machine learning, and telco network functions that require service provisioning closer to users, edge computing helps solve the key challenges of bandwidth, latency, resiliency, and data sovereignty. It complements the hybrid computing model where centralized computing can be used for compute intensive workloads while edge computing helps address the requirements of workloads that require processing in near real time." 

Dr. James Stanger, chief technology evangelist at CompTIA: “As the Internet of Things (IoT) connects more and more devices, networks are transitioning from being primarily highways to and from a central location to something akin to a spider’s web of interconnected, intermediate storage and processing devices. Edge computing is the practice of capturing, storing, processing and analyzing data near the client, where the data is generated, instead of in a centralized data-processing warehouse. Hence, the data is stored at intermediate points at the ‘edge’ of the network, rather than always at the central server or data center.”

Richard Villars, Vice President, Data Center and Cloud at IDC: “IT at the edge is about the consumption of IT resources in increasingly ‘smart’ edge locations. These are the urban cores, hospitals, factories, transportation hubs, and a wide range of spaces where we all work, play, and live as well as where we all want to use ‘smart’ things to deliver optimal digital experience.”

Jason Mann, Vice President of IoT at SAS: “Edge computing refers to the engagement and analysis of data at its point of origin with available computing and storage capabilities.”

Nima Negahban, CTO at Kinetica: “Technically, edge computing is a distributed paradigm that enables data to be processed locally across smart objects, mobile phones, and local networks. Instead of sending massive amounts of data generated by devices on the Internet of Things back up to the central cloud for processing – which takes more time, requires more bandwidth, and tends to cost more – data analysis can now take place on a user’s device at the edge of the network.”
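
One way to picture the bandwidth savings Negahban describes: rather than uploading every raw reading, an edge node can summarize a window of data locally and send only the summary. A minimal sketch, where send_to_cloud is a hypothetical stand-in for a real uplink:

```python
import random
import statistics


def send_to_cloud(payload: dict) -> None:
    """Hypothetical uplink; a real node might POST this to an ingest API."""
    print("uploading:", payload)


# Simulate ten minutes of once-per-second sensor readings.
readings = [random.gauss(70.0, 5.0) for _ in range(600)]

# Ship one compact summary instead of 600 raw values.
send_to_cloud({
    "count": len(readings),
    "mean": round(statistics.mean(readings), 2),
    "min": round(min(readings), 2),
    "max": round(max(readings), 2),
})
```

One small message replaces hundreds of raw ones, which is where the time, bandwidth, and cost savings come from.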

[ Want to learn more about implementing edge computing? Read the blog: How to implement edge infrastructure in a maintainable and scalable way. ]

Ryan Martin, principal analyst at ABI Research: “Edge computing is the idea that processing should happen closer to where data is created, when it makes sense. Edge intelligence refers to the ability to intelligently distribute computing between edge and cloud resources.”

Todd Loeppke, chief architect at Sungard Availability Services: “Prior to edge computing, data was collected from distributed locations outside the traditional data center. The data was then sent to the data center where it would be processed, meaning either a decision was made based on the data or the value of the data was determined. With the advent of edge computing, decisions can be made at the collection point or at a location physically close to the collection point. This significantly improves the time required to make a decision based on the data, which is critical for many use cases that utilize real-time decisions, such as autonomous cars communicating with each other.”
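
Loeppke's latency point can be sketched the same way: the time-critical decision happens at the collection point, and the cloud only hears about it afterward. The braking rule, the 6 m/s^2 deceleration figure, and log_to_cloud below are illustrative assumptions, not taken from any real vehicle stack:

```python
def decide_brake(distance_m: float, speed_mps: float) -> bool:
    """Illustrative local rule: brake if the stopping distance
    (assuming a constant 6 m/s^2 deceleration) exceeds the gap."""
    stopping_distance = speed_mps ** 2 / (2 * 6.0)
    return stopping_distance >= distance_m


def log_to_cloud(event: dict) -> None:
    """Hypothetical non-urgent uplink; runs after the decision, never before."""
    print("logged:", event)


# The safety decision is made locally and immediately --
# it never waits on a network round trip to a data center.
if decide_brake(distance_m=25.0, speed_mps=20.0):
    print("BRAKE")
    log_to_cloud({"event": "emergency_brake", "distance_m": 25.0})
```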

Edge vs. cloud: How to explain

Think of edge as an extension of the cloud rather than a replacement, says Seth Robinson, senior director of technology analysis at technology association CompTIA. In fact, edge is a key enabler for unlocking the full power of data in the cloud. Data from various connected devices in the IoT ecosystem is collected in a local device, analyzed at the network, and then transferred to the central data center or cloud, says Manali Bhaumik, lead analyst at technology research and advisory firm ISG.

However, “to harness the combination of cloud and edge computing solutions, workloads must be containerized and distributed across multiple clouds, as-a-service platforms, edge servers, and edge devices,” says Craig Wright, managing director with management consultancy Pace Harmon.

Bonus tip: The edge computing pizza place analogy

Sometimes, nothing works like a food analogy. Try this one out:

"A pie baked at the main location would get cold on its way to a distant customer.”

Michael Clegg, Vice President and General Manager of IoT and Embedded at Supermicro: “By processing incoming data at the edge, less information needs to be sent to the cloud and back. This also significantly reduces processing latency. A good analogy would be a popular pizza restaurant that opens smaller branches in more neighborhoods, since a pie baked at the main location would get cold on its way to a distant customer.”

So how do you know when you need to build those neighborhood pizza joints, er, use edge computing? Let's look at some examples:

Stephanie Overby is an award-winning reporter and editor with more than twenty years of professional journalism experience. For the last decade, her work has focused on the intersection of business and technology. She lives in Boston, Mass.