Edge computing: 4 pillars for CIOs and IT leaders

IT leaders share insights on what you should keep in mind when designing your organization’s edge strategy

If it seems like the IT industry has been talking about edge computing for years now, well, that’s because it has – and the same goes for IoT. But in practice, most organizations are just now translating that talk into action.

More and more CIOs and other IT leaders are now taking the reins on developing an edge strategy. In Red Hat’s Global Tech Outlook 2022, 61 percent of IT leaders reported that they are planning to run IoT, edge computing, or both in the next 12 months. When combined as a single category, the two outpace AI/ML (53 percent) as the top area for emerging IT workloads this year.

For many organizations, edge computing is a natural expansion of their maturing cloud strategy and architecture – especially (but certainly not limited to) hybrid cloud environments.

“Edge computing complements what cloud computing does for a company’s compute plans – the two work together,” says Ron Howell, managing enterprise network architect, Capgemini Americas.

While edge use cases are numerous (and still emerging), one way to think about its relationship with cloud is that it can pick up where a cloud or centralized datacenter leaves off – especially as endpoints, applications, and data become distributed to infinity and beyond.

[ Related read: Edge computing and IoT: How they fit together. ]

“Secure connectivity is the goal of good network design, protecting all company assets, and industry leaders are coming to know that not all IT and business network requirements can be solved using only a cloud-centered enterprise architecture,” Howell says. “Cloud compute services will be enhanced and complemented with the use of edge compute, edge security, and the right network.”

4 essentials for your edge computing strategy

With that in mind, we asked Howell and other IT leaders and edge experts to share some of the essential concerns of an edge computing strategy. Here are four fundamentals to bear in mind in your own planning.

1. Edge should solve "good problems"

As with any major technology implementation, an edge computing strategy should be firmly grounded in a business case – what problems will an edge deployment solve that you can’t solve (at least not as well) in your cloud or datacenter environments?

“One of the most important things out of the gate is fully understanding the characteristics that make a problem good to solve through edge computing,” says Jeremy Linden, senior director of product management at Asimily.

It’s about matching the tool to the job – I could use a blowtorch to light a backyard barbecue grill, but a box of matches does that job just fine.

“Good problems” for edge computing commonly fall into one of several categories, with latency being at the top of the list. (For a deeper dive on edge computing and latency, check out Gordon Haff’s recent article on the subject.)

“These problems tend to either require lower latency than would be feasible through a more centrally structured architecture; are in environments where network connectivity is unreliable or slow; or are very data-intensive problems that would consume a large amount of throughput to transfer data back and forth,” Linden says. “In these situations, there can be significant benefits to processing data closer to the source.”
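To make the data-intensive case Linden describes concrete, here is a rough back-of-envelope sketch. All of the numbers are illustrative assumptions, not measurements from any real deployment:

```python
# Back-of-envelope comparison: backhauling raw data to a central cloud
# vs. filtering it at the edge first. All figures below are hypothetical.

SITES = 200                      # number of edge locations (assumed)
RAW_GB_PER_SITE_PER_DAY = 50.0   # raw data generated locally (assumed)
EDGE_REDUCTION = 0.98            # fraction the edge filters out, so only
                                 # events/summaries travel upstream (assumed)

raw_backhaul = SITES * RAW_GB_PER_SITE_PER_DAY
edge_backhaul = raw_backhaul * (1 - EDGE_REDUCTION)

print(f"Centralized: {raw_backhaul:,.0f} GB/day sent to the cloud")
print(f"With edge processing: {edge_backhaul:,.0f} GB/day sent upstream")
```

Even with generous assumptions about available bandwidth, a two-orders-of-magnitude difference in daily backhaul is the kind of gap that makes a problem "good" for edge computing.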

And that’s the fundamental question to be asking: Where will our organization benefit from bringing compute/processing power closer to where data is generated and/or used? When asked broadly like that, the list of potential answers is long indeed – and a good starting point for an edge strategy before moving into the practical aspects of execution.

Speaking of which…

2. Automation and centralized management will be key

By definition, edge computing takes the notion of a centralized IT network environment and shatters it into hundreds or even thousands (or more) of smaller environments. Picture the classic image of a room full of servers – but now every server on every rack sits in its own room, or in many cases no room at all, but on an oil rig, a manufacturing floor, or a cell tower.

Almost regardless of your edge use cases, it’s going to entail moving lots of the stuff that has long been the domain of IT – infrastructure/compute, devices, applications, data – away from your IT environment, however that’s currently defined. Properly managing all of that stuff requires some forethought.

“You’re probably going to have a lot of devices out on the edge and there probably isn’t much in the way of local IT staff there,” says Gordon Haff, technology evangelist, Red Hat. “So automation and management are essential for tasks like mass configuration, taking actions in response to events, and centralized application updates.”
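The "mass configuration" task Haff mentions usually comes down to desired-state management: declare what every device should look like, detect drift, and remediate only where needed. A minimal sketch of that idea, with a made-up fleet (real deployments would use a tool such as Ansible or a Kubernetes operator rather than hand-rolled scripts):

```python
# Minimal sketch of centralized, declarative fleet management:
# compare each edge device's reported config against a desired state
# and queue only the drifted devices for remediation.
# Device names and config keys below are hypothetical.

desired = {"agent_version": "2.4.1", "log_level": "warn"}

fleet = {
    "rig-017":  {"agent_version": "2.4.1", "log_level": "warn"},
    "plant-03": {"agent_version": "2.3.9", "log_level": "warn"},
    "tower-88": {"agent_version": "2.4.1", "log_level": "debug"},
}

def drifted(reported, desired):
    """Return the config keys whose reported value differs from desired."""
    return {k for k, v in desired.items() if reported.get(k) != v}

remediation_queue = {
    device: drifted(cfg, desired)
    for device, cfg in fleet.items()
    if drifted(cfg, desired)
}

print(remediation_queue)
```

The point of the pattern: no one logs in to "tower-88" to fix it by hand – the central system notices the drift and pushes the fix, which is the only approach that scales past a handful of sites.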

[ Learn how leaders are embracing enterprise-wide IT automation: Taking the lead on IT Automation. ]

It’s similar, at least in concept, to other ways in which highly concentrated, centralized IT approaches – say, running a monolithic application in an on-premises datacenter – are being augmented or replaced by far more distributed approaches, such as running microservices-based applications in containers across multiple clouds.

Just as automation (such as configuration management and Kubernetes) quickly became critical for managing containerized applications – especially across multiple environments – it will play a key role in edge management.

3. Standardization and consistency are also your friends

The comparison to containerization carries over to a related but separate essential: standardization.

“A corollary to this requirement is maximizing consistency from the datacenter out to the edge,” Haff says. “Deploying and operating large-scale distributed infrastructures is challenging enough without throwing randomness and silos into the mix.”

In fact, the comparison between edge and containers has a specific potential link – organizations that are already running Kubernetes in their datacenters and/or clouds are increasingly extending it (albeit often in a lighter-weight version) to the edge for consistency.


Standardized operating system configurations and cluster orchestration are your friends. For example, organizations running Kubernetes in their datacenter are increasingly running smaller footprint versions of Kubernetes as a single node edge cluster for applications like Telco Radio Access Networks (RAN) or in-vehicle field operations.

Bottom line: You don’t want to be dealing with a bunch of one-offs or snowflake patterns in a large-scale edge architecture.

4. Don't forget about monitoring and observability

If you view edge as another form of (highly) distributed computing, then you should have seen this one coming: “In a single word, it’s about ‘visibility,’” says Howell, the enterprise network architect from Capgemini Americas.

Indeed, monitoring and observability – some more familiar names from the cloud-native universe – join automation and standardization on the edge computing VIP list. You’re absolutely going to need the ability to monitor and measure (the latter increasingly referred to as observability in IT) what’s happening in your edge environments.

“[Edge computing] creates more points of failure – both on the edge compute devices themselves and the network paths between the edge and cloud,” Linden of Asimily says. “It’s essential to set up monitoring on each point where failure can occur and ensure that alerts will direct your teams to the actual point of failure by running tests that simulate potential downtime or other issues.”
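Linden's advice – instrument each point where failure can occur so alerts name the actual failing hop – can be sketched as a chain of per-hop health checks. The check functions below are stubs returning canned results for illustration; a real implementation would ping devices, probe links, and hit health endpoints:

```python
# Sketch of per-hop health checks along an edge-to-cloud path, so an
# alert names the actual failing point rather than just "site down".
# All check functions are stubs; the uplink failure is simulated.

def check_edge_device():   return True    # stub: device responds
def check_uplink():        return False   # stub: simulate a dead uplink
def check_cloud_ingest():  return True    # stub: cloud endpoint healthy

CHECKS = [
    ("edge device", check_edge_device),
    ("network uplink", check_uplink),
    ("cloud ingest", check_cloud_ingest),
]

def locate_failures(checks):
    """Run every check and return the names of the failing points."""
    return [name for name, check in checks if not check()]

for point in locate_failures(CHECKS):
    print(f"ALERT: failure at {point}")   # prints: ALERT: failure at network uplink
```

An on-call engineer who sees "failure at network uplink" can skip straight to the carrier or the site router instead of triaging the device and the cloud side first – which is exactly the payoff Linden describes from testing each point of failure.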

In addition to ensuring reliability and resiliency, Howell notes that visibility is also essential to your edge security strategy – a topic of its own that we’ll be diving into in several upcoming articles.

In the meantime, keep in mind that visibility – and the ability to measure what you’re seeing – is not only about system health but about ensuring that your edge deployments are meeting their intended goals. This brings us back to #1 – if you’re using edge to solve performance problems, for example, you of course need to be able to measure performance out at the edge.

Howell sees a significant opportunity for edge computing to strengthen enterprise networks – not vice versa. The combination of well-designed hybrid cloud and edge computing services – with security woven throughout – will be “the secure enterprise architecture of the future,” he says. But that’s only possible with visibility/observability, and so it needs to be embedded in edge strategy early on.

“Real-time, proactive measurements of applications, network, and security have come to the forefront of modern IT infrastructure value,” Howell says. “This measurements-based observability provides the insight we need to improve network-based enterprises. When we can measure performance proactively, we can more effectively optimize and improve IT performance.”

[ Discover how priorities are changing. Get the Harvard Business Review Analytic Services report: Maintaining momentum on digital transformation. ]

Kevin Casey writes about technology and business for a variety of publications. He won an Azbee Award, given by the American Society of Business Publication Editors, for his InformationWeek.com story, "Are You Too Old For IT?" He's a former community choice honoree in the Small Business Influencer Awards.