Edge computing and AI: 7 things to know

How do edge computing and artificial intelligence (AI) work together? Why does edge computing fit well with AI? What are some use cases? Let's examine what IT leaders should know.
For decades, artificial intelligence (AI) lived in data centers, where there was sufficient compute power to perform processor-intensive cognitive tasks. In time, AI made its way into software, where predictive algorithms changed the nature of how these systems support the business. Now AI has moved to the outer edges of networks.

“Edge AI happens when AI techniques are embedded in Internet of Things (IoT) endpoints, gateways, and other devices at the point of use,” explains Jason Mann, vice president of IoT at SAS.

“Put another way, edge computing brings the data and the compute closest to the point of interaction,” says Red Hat chief technology strategist E.G. Nadhan. Edge AI is a very real (and rapidly expanding) phenomenon, powering everything from smartphones and smart speakers to automotive sensors and security cameras.

In fact, says Dave McCarthy, research director within IDC’s worldwide infrastructure practice focusing on edge strategies, AI is “the most common workload” in edge computing. “As IoT implementations have matured,” he adds, “there has been an increased interest in applying AI at the point of generation for real-time event detection.” 

Deloitte predicts that more than 750 million edge AI chips (specifically designed to perform or accelerate on-device machine learning) will be sold this year, with the enterprise market growing faster than its consumer counterpart at a compound annual growth rate of 50 percent over the next four years.

Enterprises will spend an average of 30 percent of their IT budgets on edge cloud computing over the next three years, according to “Strategies for Success at the Edge, 2019,” a report by Analysys Mason.

[ Why does edge computing matter to IT leaders – and what’s next? Learn more about Red Hat’s point of view. ]

As IT leaders consider where edge AI might fit into their own enterprise technology roadmap, here are some things we know now:

1. It's important to begin at the beginning

If you haven’t already implemented an edge solution, you can’t leapfrog to edge AI. “The first step for most IT leaders today is in constructing a solution architecture that leverages edge computing in conjunction with a cloud backend,” says Seth Robinson, senior director of technology analysis at CompTIA. “Moving forward, integrating AI will be a critical step in managing the scale of edge solutions and building competitive advantage.”
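To make that architecture concrete, here is a minimal Python sketch of the pattern Robinson describes: an edge node reduces raw sensor data locally and forwards only summaries to a cloud backend. The endpoint URL and the sensor-reading function are hypothetical placeholders, not any specific vendor's API.

```python
# Minimal sketch of the edge-plus-cloud-backend pattern, assuming a
# hypothetical telemetry endpoint: the edge node aggregates raw sensor
# readings locally and ships only summaries upstream.
import json
import random
import statistics
import urllib.request

CLOUD_BACKEND = "https://cloud.example.com/api/telemetry"  # hypothetical URL

def read_sensor_window(n: int = 60) -> list[float]:
    """Stand-in for sampling n readings from a local sensor."""
    return [20.0 + random.random() for _ in range(n)]

def summarize(samples: list[float]) -> dict:
    """Reduce raw data at the edge so only aggregates cross the network."""
    return {
        "mean": statistics.mean(samples),
        "min": min(samples),
        "max": max(samples),
        "count": len(samples),
    }

def push_to_cloud(summary: dict) -> None:
    """Forward the local summary to the cloud backend."""
    req = urllib.request.Request(
        CLOUD_BACKEND,
        data=json.dumps(summary).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    push_to_cloud(summarize(read_sensor_window()))
```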

2. Edge AI can address limitations of cloud-based AI

Latency, security, cost, bandwidth, and privacy are some of the issues associated with machine learning and deep learning tasks that edge AI – sitting closer to data sources – can mitigate. Every time you ask Siri, Alexa, or Google a question, for example, your voice recording is sent to an edge network, where Google, Apple, or Amazon uses AI to translate voice to text so a command processor can generate an answer.

Without the edge, waiting seconds for a response would be commonplace. “The edge network allows for a pleasant user experience within the Doherty Threshold (less than 400 milliseconds),” says Stephen Blum, CTO and co-founder at PubNub. “Google, Apple, and Amazon have spent millions investing in their edge so that their AI can answer you quickly. To compete with the giants, the business needs to invest in edge AI.”
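As an illustration of that latency budget, the short Python sketch below times a single round trip and checks it against the roughly 400-millisecond Doherty Threshold Blum cites. The endpoint URL is a hypothetical placeholder, not a real service.

```python
# Rough sketch of checking a round trip against the ~400 ms Doherty
# Threshold Blum mentions. The endpoint URL is a hypothetical placeholder.
import time
import urllib.request

DOHERTY_BUDGET_S = 0.4  # 400 milliseconds

def timed_request(url: str) -> float:
    """Return the round-trip time of a single request, in seconds."""
    start = time.perf_counter()
    urllib.request.urlopen(url, timeout=2)
    return time.perf_counter() - start

if __name__ == "__main__":
    rtt = timed_request("https://edge.example.com/ask")  # hypothetical edge node
    verdict = "within" if rtt <= DOHERTY_BUDGET_S else "over"
    print(f"round trip: {rtt * 1000:.0f} ms ({verdict} the Doherty budget)")
```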

[ Get a shareable primer: How to explain edge computing in plain English. ]

3. Only a portion of the AI workflow happens at the edge today

“AI edge processing today is focused on moving the inference part of the AI workflow to the device,” Omdia analysts explain in their Artificial Intelligence for Edge Devices report. The AI models themselves are usually trained in a central data center or cloud infrastructure using historical data sets, IDC’s McCarthy explains. Then those AI models can be deployed to the edge for local inferencing against current data.

“Essentially, companies can train in one environment and execute in another,” says Mann of SAS. “The vast volumes of data and compute power required to train machine learning is a perfect fit for cloud, while inference or running the trained models on new data is a perfect fit for execution at the edge.” Model compression techniques that “enable squeezing large AI models into small hardware form factors” could push some training to the edge over time, notes Omdia.
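Here is a minimal Python sketch of that split, using scikit-learn purely for illustration: the model is fitted on historical data on the cloud side, serialized, and then loaded for local inferencing on fresh data on the edge side. All data here is synthetic, and a real edge deployment would typically ship a compressed or quantized artifact rather than a pickle.

```python
# Sketch of "train in one environment, execute in another": fit on
# historical data (cloud side), serialize the model, then load it and
# run inference on current data (edge side). Data is synthetic.
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression

# --- cloud side: training on historical data ---
history_X = np.random.rand(1000, 4)               # stand-in historical features
history_y = (history_X[:, 0] > 0.5).astype(int)   # stand-in labels
model = LogisticRegression().fit(history_X, history_y)
artifact = pickle.dumps(model)                    # model shipped to the edge

# --- edge side: local inferencing on current data ---
edge_model = pickle.loads(artifact)
current_reading = np.random.rand(1, 4)            # stand-in live sensor data
print("event detected" if edge_model.predict(current_reading)[0] else "normal")
```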

[ Read also: How big data and AI work together. ]

4. Real-time learning at the edge will take time

“Real-time learning allows the AI to continuously evolve and improve during each interaction. For the AI to learn in real time, the matrix (AI brain) must allow training while also answering to your requests. Additionally, data learned must be synchronized with peering edges,” says Blum of PubNub. “This logistical challenge has led most networks to exclude real-time learning.”

When those challenges are overcome, however, that will open the door to even more advanced edge AI applications, Blum says.
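To show what "training while also answering requests" could look like, here is a hypothetical sketch of an online learner that serves predictions and updates its weights on every interaction. The model and data are invented for illustration, and the peer-synchronization step Blum highlights is reduced to a comment, since that is exactly the unsolved logistics he describes.

```python
# Hypothetical sketch of real-time learning at the edge: the model keeps
# answering predictions while taking one SGD step per labeled interaction.
import numpy as np

class OnlineLinearModel:
    def __init__(self, n_features: int, lr: float = 0.01):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict(self, x: np.ndarray) -> float:
        """Serve an answer; stays available during learning."""
        return float(self.w @ x)

    def learn(self, x: np.ndarray, y: float) -> None:
        """One SGD step per interaction: 'training while answering'."""
        error = self.predict(x) - y
        self.w -= self.lr * error * x
        # A real deployment would also need to reconcile self.w with
        # peer edge nodes here, the synchronization challenge above.

model = OnlineLinearModel(n_features=3)
for _ in range(100):
    x = np.random.rand(3)
    y = 2 * x[0] + 1 * x[1]   # stand-in ground truth
    _ = model.predict(x)      # answer the request
    model.learn(x, y)         # improve from the interaction
print("learned weights:", model.w.round(2))
```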

Let’s look at three more important factors:

Stephanie Overby is an award-winning reporter and editor with more than twenty years of professional journalism experience. For the last decade, her work has focused on the intersection of business and technology. She lives in Boston, Mass.