Edge computing: 4 trends for 2023

Edge computing can be the key to faster, more efficient processes, but it involves a complex collection of variables. Consider these factors as you design your edge strategy.

Edge computing has emerged as a rational and important use case demonstrating why hybrid cloud architectures usually win out over purely centralized cloud approaches. There are several reasons to adopt edge computing, but most involve moving compute closer to where data is created or consumed.

For example, if latency is important because some local action must be taken in response to an event – perhaps a process has drifted out of allowable parameters on a factory floor – it’s useful not to have to traverse a network to take that action.
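As an illustration, here's a minimal sketch of that kind of local control loop. The setpoint, drift threshold, and sensor logic are all invented for the example; the point is simply that the sense-decide-act cycle never leaves the edge node:

```python
import random
import time

SETPOINT = 100.0     # hypothetical target process value
DRIFT_LIMIT = 2.5    # hypothetical allowable deviation

def read_sensor():
    # Stand-in for reading a value from local factory equipment.
    return SETPOINT + random.gauss(0, 2)

def actuate_correction(deviation):
    # Stand-in for a local corrective action (e.g., adjusting a valve).
    print(f"correcting drift of {deviation:+.2f}")

def control_loop(iterations=50):
    # Everything here runs on the edge node itself, so reaction time
    # is bounded by local processing, not by a round trip to the cloud.
    for _ in range(iterations):
        deviation = read_sensor() - SETPOINT
        if abs(deviation) > DRIFT_LIMIT:
            actuate_correction(deviation)
        time.sleep(0.1)  # react within ~100 ms, not at WAN latency

if __name__ == "__main__":
    control_loop()
```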

Other aspects of network connectivity can also come into play: Bandwidth is a limited resource, and the network may not be 100 percent reliable; you may want your retail store to be able to operate at some level even if the link to headquarters goes down.
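The retail scenario is essentially a store-and-forward pattern. Here's a hedged sketch of the idea, with a hypothetical headquarters endpoint standing in for the real link; a production system would use more robust queuing, but the shape is the same:

```python
import json
import queue
import urllib.request

# Hypothetical headquarters endpoint; the store keeps operating
# even when this link is unavailable.
HQ_URL = "https://hq.example.com/transactions"

pending = queue.Queue()  # local buffer for records awaiting upload

def record_sale(transaction: dict):
    # Always commit locally first; the sale succeeds regardless of
    # whether the WAN link to headquarters is up.
    pending.put(transaction)

def flush_to_hq():
    # Periodically attempt to drain the local buffer. On failure,
    # requeue and try again later rather than blocking the store.
    while not pending.empty():
        txn = pending.get()
        try:
            data = json.dumps(txn).encode()
            req = urllib.request.Request(
                HQ_URL, data=data,
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req, timeout=5)
        except OSError:
            pending.put(txn)  # link is down; keep the record locally
            break

record_sale({"sku": "1234", "amount": 19.99})
flush_to_hq()
```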

There may also be regulatory or data management reasons not to ship data off to some central location in the first place.

1. There is no single edge

As that range of motivations suggests, there are many different edges, implemented for different purposes.

Edge servers and gateways may aggregate multiple servers and devices in a distributed location, such as a manufacturing plant. An end-user premises edge might look more like a traditional remote/branch office (ROBO) configuration, often consisting of a rack of blade servers.

Telecommunications providers have their own architectures that break down into a provider far edge, a provider access edge, and a provider aggregation edge. (The terminology for the different classes of telco equipment varies; the basic point is that the edge spans a range of tiers that no single class of device can satisfy.)

Finally, even the notion of a single core data center needs qualification. Many organizations complement a core data center with regional data centers to better handle redundancy, large data transfers, and data sovereignty requirements.

[ Related read: Edge computing: 3 ways you can use it now. ]

The edge is hybrid!

2. The scale of edge requires automation

The edge has a very large footprint, especially as you move out to the far reaches of the network: there are many devices and often limited (or no) on-site IT staff.

While automation is important to streamline operations at any scale, it’s not hyperbole to say that a successful edge deployment is impossible without automation. Even if it were economically viable to make updates and other changes manually and locally, ensuring consistency – an important theme for any edge architectural discussion – across an edge architecture requires automation.

Fundamentally, automation simplifies edge architectures, which is a good thing for many reasons, especially given the limited IT staff at edge locations. New sites can be deployed more quickly, and there are fewer opportunities for downtime caused by misconfigurations and other manual errors.
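To illustrate the underlying idea, here's a minimal reconcile-loop sketch. The site names and settings are invented, and a real deployment would use purpose-built tooling such as Ansible or a GitOps pipeline rather than hand-rolled code; what matters is that the desired state is declared once and applied idempotently everywhere:

```python
# Declare the desired state once; apply it to every site.
DESIRED_STATE = {
    "agent_version": "2.4.1",
    "log_level": "warn",
}

def current_state(site: str) -> dict:
    # Stand-in for querying what is actually running at a site.
    return {"agent_version": "2.3.0", "log_level": "debug"}

def apply_state(site: str, changes: dict):
    # Stand-in for pushing configuration out to the site.
    print(f"{site}: applying {changes}")

def reconcile(sites):
    # Only drifted settings are changed, so running this repeatedly
    # is safe -- which is what keeps hundreds of unstaffed locations
    # consistent without manual visits.
    for site in sites:
        state = current_state(site)
        drift = {k: v for k, v in DESIRED_STATE.items()
                 if state.get(k) != v}
        if drift:
            apply_state(site, drift)

reconcile(["store-001", "store-002", "plant-denver"])
```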


3. The edge needs Kubernetes, too

Kubernetes is generally associated with clusters of containers. But it has an increasing place on the edge as well. There are a couple of reasons for this.

The first is simply that edge doesn’t necessarily mean small. Telcos, for example, are increasingly replacing dedicated hardware with software-defined infrastructures, such as the radio access network (RAN) that links user equipment to the core network. Applications such as this can benefit from Kubernetes to coordinate the many components of the architecture.

Machine learning is another increasingly common edge application. Training machine learning models typically still happens in a core data center. However, once the model is trained, it can be pushed to the edge where operational data is collected. Doing so can reduce the amount of data that needs to be transmitted back over the network and allows any actions that need to be taken in response to that data to happen locally and quickly.

For example, at the OpenShift Commons Gathering before KubeCon + CloudNativeCon in October, Lockheed Martin talked about how they use edge computing to deploy models trained on a system in Denver onto a small edge computer in their Stalker unmanned aerial system (UAS). This allows the Stalker to use onboard sensors and AI to adapt in real time to a threat environment.
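Here's a minimal sketch of that train-centrally, infer-at-the-edge pattern. The model format, path, and anomaly threshold are illustrative assumptions, not details from any particular deployment:

```python
import json
import statistics

def load_model(path: str):
    # Stand-in for loading a model artifact pushed from the core
    # data center (e.g., an ONNX file). Here, a fake threshold model.
    def model(reading: float) -> bool:
        return reading > 42.0  # pretend threshold learned in training
    return model

def run_edge_inference(readings, model):
    # Inference happens locally, so anomalous readings can trigger
    # immediate local action...
    anomalies = [r for r in readings if model(r)]
    # ...while only a compact summary goes back over the network,
    # instead of the full raw sensor stream.
    summary = {
        "count": len(readings),
        "anomalies": len(anomalies),
        "mean": statistics.fmean(readings),
    }
    return anomalies, json.dumps(summary)

model = load_model("/models/latest.onnx")  # hypothetical path
anomalies, summary = run_edge_inference([40.1, 41.7, 43.2, 39.8], model)
print(summary)
```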

Considering again the consistency and simplicity discussed earlier: if an enterprise runs Kubernetes in its data center, shouldn't it want to run the same software wherever practical? More standardization means less work dealing with unique platforms on different tiers of the architecture. It also helps future-proof against the need to increase edge computing capacity later.

While it’s true that some edge nodes will be too constrained to run Kubernetes, a lot of active community work is going into variants of Kubernetes that slim down the amount of code that needs to run while providing a consistent management view. For a given deployment, the best approach will depend on factors such as how constrained the resources are, the reliability of the network, and availability requirements.

4. Take advantage of established patterns

Edge deployments are bespoke to greater or lesser degrees because they marry IT with operational technology (OT), which is likely specific to a company’s industry, installed industrial equipment, and many other aspects of its established operations. However, even if you can’t buy a single edge platform that meets all your requirements out of the box, many existing technologies can be connected in patterns that recur from deployment to deployment. The details will vary, but the overall architectures can have distinct similarities.

Portfolio architectures, such as the one for RANs described earlier, document patterns seen across multiple successful customer deployments. They’re open source and can be used as a basis for customization to a particular organization’s needs. They contain overview, logical, and schematic diagrams of each of the technical components, as well as other supporting reference material.

Additional portfolio architectures directly relevant to edge computing include industrial edge, which is applicable across several vertical industries, including manufacturing. It shows the routing of sensor data for two purposes: model development in the core data center and live inference in the factory data centers.

Another architecture shows how to enable medical imaging diagnostics with edge computing. This AI/ML example provides a way to increase the efficiency of a medical facility by reducing the time spent on routine diagnoses, giving medical experts more time to focus on difficult cases.

While bringing together IT and OT systems is a challenging task, edge computing is already in wide and productive use. In addition to the portfolio architectures already discussed, there are applications in everything from telco 5G to factory preventative maintenance to in-vehicle fleet operations to asset monitoring and more. Edge can be the key to faster data-driven outcomes, better end-user experiences, and higher application and process resiliency.

[ Discover how priorities are changing. Get the Harvard Business Review Analytic Services report: Maintaining momentum on digital transformation. ]

Gordon Haff is Technology Evangelist at Red Hat where he works on product strategy, writes about trends and technologies, and is a frequent speaker at customer and industry events on topics including DevOps, IoT, cloud computing, containers, and next-generation application architectures.
