Edge computing vs. cloud computing: What's the difference?
Is edge computing just new branding for a type of cloud computing, or is it something truly new? Let's examine how the edge approach works, where edge makes sense, and how edge and cloud will coexist
Doing enterprise computing in the cloud is no longer a novel concept. But what about the edge? You’re hearing more about edge computing, often in the same breath as talk about 5G and Internet of Things (IoT). So is edge computing just new branding for a type of cloud computing, or is it something truly new? As is often true on the frontiers of technology, there is room for debate about the answers to these questions and the precise definition of what does and does not qualify as edge computing.
“Edge computing can apply to anything that involves placing service provisioning, data, and intelligence closer to users and devices,” says Red Hat technology evangelist Gordon Haff. Put another way: The term “edge computing” covers a bit too much territory.
The basic idea: Place computing resources closer to the user or the device, at the “edge” of the network, rather than in a hyperscale cloud data center that might be many miles away in the “core” of the network. The edge approach emphasizes reducing latency and providing more processing of data close to the source.
Mobile apps working with the edge network could make greater use of artificial intelligence and machine learning algorithms if they didn’t have to rely entirely on their own processors – or drain phone batteries with intense computation. Other frequently mentioned applications include edge computing for autonomous cars, augmented reality, industrial automation, predictive maintenance, and video monitoring.
[ Why does edge computing matter to IT leaders – and what's next? Learn more about Red Hat's point of view. ]
Where edge computing performance matters
Consider the case of autonomous cars and the networked systems to support them. Distributing navigation software updates to vehicles overnight, as Tesla and others already do, is a fine application for cloud computing. At the other extreme, the decision of whether to veer left or right to avoid a pedestrian running across the street probably needs to be made independently by an onboard computer – it’s certainly no time to wait for a server in a remote data center to respond.
In between, networked traffic systems could play a productive role at the edge of the network, which might mean computing nodes placed in traffic lights and cell towers. For example, if a driver is speeding the wrong way down the highway toward you, telling your car which way to veer off the road (without colliding instead with the broken-down car on the shoulder) is something you would like to happen within a few milliseconds.
If our hypothetical autonomous vehicle traffic system operates over 5G mobile networks, the bandwidth and low latency of that networking technology would speed connectivity to vehicles and roadside sensors. The question is, once the signal reaches the nearest mobile network node, where does it go from there? In a life-or-death application, you would like to have the data processed right there at the roadside – or as close to that as possible – so the collision avoidance message can make it to your car while there is still a possibility of saving your life.
“From the cell tower, the signal can go over fiber, which is lightning fast, but there’s latency still at the speed of light, and if you have to communicate with a data center 2,000 miles away, that’s going to be a large latency,” says Adam Drobot, chairman of the board at OpenTechWorks Inc. and a member of the FCC Technology Advisory Committee. He is listed as one of the coauthors on an FCC white paper on 5G, edge computing, and IoT.
“Things that require real-time performance are going to tend to be done at the edge,” Drobot says.
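Drobot’s 2,000-mile point can be sanity-checked with back-of-the-envelope arithmetic: light in fiber travels at roughly two-thirds of its vacuum speed, so distance alone sets a hard floor on latency before any routing or queuing delay is added. The distances below are illustrative, not measurements:

```python
# Back-of-the-envelope: one-way propagation delay over optical fiber.
SPEED_OF_LIGHT_KM_S = 299_792   # km per second, in vacuum
FIBER_FACTOR = 2 / 3            # typical slowdown from the fiber's refractive index

def one_way_delay_ms(distance_km: float) -> float:
    """Propagation-only delay in milliseconds (ignores routing and queuing)."""
    return distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000

# A data center 2,000 miles (~3,200 km) away vs. an edge node 10 km away:
for km in (3_200, 10):
    print(f"{km:>5} km: ~{one_way_delay_ms(km):.2f} ms one way")
```

Even this best-case figure, roughly 16 ms each way to a distant data center, is an eternity next to the sub-millisecond reach of a nearby edge node, and real networks add further delay on top.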
Edge computing will take its place in a spectrum of computing and communications technologies as another place where system architects can place computing workloads — neither on-premises nor in the cloud but at the edge, Drobot says.
Mobile phone companies are particularly excited about the opportunity to play a bigger role in the future of computing, Drobot adds. “They’ve all got dog houses at every cell tower,” he notes — using industry jargon for telecom equipment shacks — “and they’re drooling over this.” That is, each of those dog houses could become a mini-data center providing edge services.
Telecom companies that have been talking up the potential of 5G also need edge computing for their own purposes, says Dalia Adib, principal consultant and leader of the edge computing practice at STL Partners. “The latency targets they have for 5G you almost can’t get without edge,” she says, adding that the two technologies are interdependent and will need each other to reach maturity.
To visualize a manufacturing example for edge, look at a Siemens trade show presentation on how edge could be applied to predictive maintenance on the shop floor. The presenters use the example of a robot responsible for catching electronic components as they come off the assembly line and placing them in packaging for shipment. If the robot breaks down, products fall on the floor and the line has to be stopped for emergency maintenance. An edge device with more computing power than the individual robots carry can gather sensor data from them and use it to predict when a component such as the robot’s suction gripper is about to fail, so maintenance can be scheduled proactively before expensive mistakes happen. The app on the edge device feeds data into even more powerful machine learning systems in the cloud, which improve the predictive algorithms and push updates back to the edge device.
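The pattern in the Siemens example can be sketched minimally: the edge device makes the time-sensitive call locally and ships only a compact summary upstream. Every name, threshold, and reading below is a hypothetical illustration, not part of the Siemens demo:

```python
# Sketch: edge device aggregates robot sensor readings, applies a
# lightweight local rule, and forwards only a summary to the cloud.
from statistics import mean

GRIP_PRESSURE_FLOOR = 0.75  # hypothetical failure threshold for the suction gripper

def needs_maintenance(readings: list[float]) -> bool:
    """Local decision: flag the robot if recent gripper pressure trends low."""
    return mean(readings[-5:]) < GRIP_PRESSURE_FLOOR

def summarize_for_cloud(robot_id: str, readings: list[float]) -> dict:
    """Ship a compact summary, not the raw stream, to the cloud ML pipeline."""
    return {"robot": robot_id, "min": min(readings),
            "mean": round(mean(readings), 3), "samples": len(readings)}

readings = [0.92, 0.90, 0.88, 0.85, 0.80, 0.78, 0.74, 0.72, 0.70, 0.69]
if needs_maintenance(readings):
    print("schedule proactive maintenance")               # acted on at the edge
print(summarize_for_cloud("gripper-robot-07", readings))  # sent upstream
```

In a production system the local rule would be a trained model pushed down from the cloud, but the division of labor is the same: fast decisions at the edge, heavy training in the data center.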
[ Get a shareable primer: How to explain edge computing in plain English. ]
Edge doesn't replace public clouds
Some people argued in the early days of cloud computing that the public cloud compute utility would subsume other forms of computing, much as electric utilities did, Red Hat’s Haff notes.
“However, a centralized utility for IT was never a realistic expectation,” Haff says. “Even public clouds themselves have evolved to offer provider-specific differentiated services, rather than competing as commoditized utilities. But more generally, edge computing is a recognition that enterprise computing is heterogeneous and doesn’t lend itself to limited and simplistic patterns.
“Edge computing only replaces public clouds in a world where public clouds were going to otherwise capture all workloads. And that world isn’t this one,” Haff says.
[ Read also: Edge computing: 4 common misconceptions, explained, by Gordon Haff. ]
Some of the same container technologies that have become important for moving workloads between enterprise systems and the cloud will be employed for distributing computing to edge locations.
Systems for metering usage and billing customers may be proprietary, but one lesson learned from the cloud computing era is that the most successful computing services operations are those that embrace open software and standards, Drobot says.
Adib of STL Partners says many of the companies she advises in the manufacturing, oil and gas, and telecommunications industries feel they have been burned in the past by proprietary legacy technologies. “They’re trying not to replicate the mistakes of the past,” she says. “Also, they don’t yet know who to work with…and they don’t want to be locked into any application or system that they can’t rip out if they want to.”
Real-time performance is one of the main reasons for using an edge computing architecture, but not the only one. Edge computing can also help prevent overloading network backbones by processing more data locally and sending to the cloud only data that needs to go to the cloud. There could also be security, privacy, and data sovereignty advantages to keeping more data close to the source rather than shipping it to a centralized location.
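The backbone-offload point is easy to quantify with rough arithmetic. Take the video monitoring use case mentioned earlier: a camera streaming everything to the cloud versus an edge node that uploads only detected events. The bitrate, event count, and clip length below are illustrative assumptions, not benchmarks:

```python
# Rough arithmetic: continuous cloud upload vs. edge-filtered event clips.
RAW_STREAM_MBPS = 4.0        # assumed bitrate for a 1080p camera
SECONDS_PER_DAY = 24 * 3600

# Megabits -> gigabytes: divide by 8 (bits to bytes), then by 1000 (MB to GB).
raw_gb_per_day = RAW_STREAM_MBPS * SECONDS_PER_DAY / 8 / 1000

EVENTS_PER_DAY = 40          # assumed motion/person detections per camera
CLIP_SECONDS = 15            # assumed clip length uploaded per event
event_gb_per_day = RAW_STREAM_MBPS * CLIP_SECONDS * EVENTS_PER_DAY / 8 / 1000

print(f"stream everything:   ~{raw_gb_per_day:.0f} GB/day per camera")
print(f"edge-filtered clips: ~{event_gb_per_day:.2f} GB/day per camera")
```

Under these assumptions a single camera drops from tens of gigabytes a day to well under one, and the savings multiply across a fleet of cameras, which is exactly the backbone relief the edge architecture promises.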
There are plenty of challenges ahead for edge computing, however. A recent Gartner report, How to Overcome Four Major Challenges in Edge Computing, suggests “through 2022, 50 percent of edge computing solutions that worked as proofs of concept (POCs) will fail to scale for production use.” Those who pursue the promise of edge computing need to be prepared to tackle all the usual issues associated with technologies that still need to prove themselves – best practices for edge system management, governance, integration, and so on have yet to be defined.
The edge and the cloud coexist
No one is going to abandon the cloud in favor of the edge. As the FCC white paper puts it, “Many industry experts are pushing back on the notion that cloud and edge computing are in competition with each other. Instead, forward-looking organizations, and even many public cloud service providers, are beginning to consider how to selectively employ both.”
In other words, functions best handled by splitting computing between the end device and local network resources will be done at the edge, while big data applications that benefit from aggregating data from everywhere and running it through analytics and machine learning algorithms that operate economically in hyperscale data centers will stay in the cloud. And the system architects who learn to use all these options to the best advantage of the overall system will be heroes.
“I think it’s going to be very rare that an application will live only in edge computing,” Adib says. “It’s going to need to communicate and interact with other workloads that are in the cloud or in an enterprise data center or on another device.”
[ Want to learn more about implementing edge computing? Read the blog: How to implement edge infrastructure in a maintainable and scalable way. ]