“It’s tough to make predictions, especially about the future.”
It’s one of my favorite quotes from baseball legend Yogi Berra, and it seems especially timely these days. After all, who would have guessed a year ago that we’d be where we are today? Who would have planned for it? Who would have budgeted for it? Not many of us, I’m sure.
When we’re facing unpredictable adversity – as we did in 2020 – we need to be nimble so we can rise to these challenges. The best way to be nimble in IT is to not put yourself in a situation where you’re second-guessing your choices.
Whenever possible, keep your options open. By standardizing on an open substrate built on open source technologies, you can defer commitments to particular technologies – possibly indefinitely. This openness gives you the power to change as your organization’s needs change.
Hybrid cloud: A practical reality
Another way to be nimble is to ensure your cloud strategy accounts for both where you are today (on-premises) and where you want to be tomorrow (the public cloud). This is known as a hybrid cloud strategy: You have one foot in your own data center and another in the public cloud for years to come.
Hybrid cloud is a reality, like it or not. Back in 2011, the U.S. Federal Cloud Computing Strategy (Cloud First) was admirable, inspirational, and aspirational. But as we’ve learned since then, not every workload is well-suited for the public cloud.
Traditional workloads may have been designed to scale up, with hardware redundancy in mind. But the public cloud is designed to scale out rather than up, with software that is resilient to hardware failure rather than dependent on redundant hardware.
Almost every organization has some mix of traditional and cloud-native applications – one foot in the data center and the other in the cloud. And until all traditional apps are re-platformed to be cloud-native, the hybrid cloud is likely to remain for decades. The revised 2019 U.S. Federal Cloud Computing Strategy (Cloud Smart) recognizes this practical reality, and so has the industry.
[ Working on hybrid cloud strategy? Get the four-step hybrid cloud strategy checklist. ]
When hybrid cloud meets edge computing
One thing Cloud Smart didn’t address is edge computing, which is becoming more and more popular. You may have heard the term data gravity: Data has a gravitational pull, and performing computation where massive data sets reside is easier, faster, and cheaper than sending petabytes of data to where the computation is.
Instead of being generated in the public cloud, data is generated at the edge, where sensors collect it in massive amounts. It’s not practical to send massive amounts of data to the public cloud for processing. It takes too long and it’s too costly. What you need to do is move the compute to where the data is – the edge.
[ Get a shareable primer: How to explain edge computing in plain English.]
Back in 2016, then-Intel CEO Brian Krzanich said that one autonomous car would generate and consume roughly four TB of data for every hour of driving. Sending this data to a public cloud for processing would be impractical, if not impossible.
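A quick back-of-the-envelope calculation makes the point concrete. The uplink speed below is an illustrative assumption (the article doesn’t specify one), but even a generous connection struggles with this volume of data:

```python
# Back-of-the-envelope: how long would it take to upload one hour of
# autonomous-car data (~4 TB, per the Krzanich figure) to the cloud?
# The 100 Mbps uplink is a hypothetical assumption for illustration.

DATA_TB = 4          # one hour of driving, in terabytes (decimal units)
UPLINK_MBPS = 100    # hypothetical uplink bandwidth, in megabits/sec

data_bits = DATA_TB * 1e12 * 8            # terabytes -> bits
seconds = data_bits / (UPLINK_MBPS * 1e6)  # transfer time in seconds
hours = seconds / 3600

print(f"Upload time: {hours:.0f} hours")   # roughly 89 hours
```

In other words, under these assumptions it would take well over three days to upload a single hour of driving data – the car generates data far faster than it could ever ship it to the cloud, which is exactly why the compute has to move to the edge.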
Fast-forward to today: The amount of data is greater than ever, making data processing at the edge even more essential. The challenge, however, is that edge devices may have different capabilities than those in the public cloud. This leads to a bifurcation of public cloud and edge development and maintenance methodologies, which limits innovation and nimbleness due to limited reuse and lost economies of scale.
To address this complexity and maximize component reuse for future applications, an open hybrid cloud is the way to go: Build on an open substrate that stays the same so everything else can be different. To do this, you need to snap a chalk line in your IT stack: Everything above the line is portable to wherever you choose to run below it – physical hardware, virtualized infrastructure, private cloud, public cloud, and edge – all five footprints.
Many organizations snap that chalk line at the operating system layer of their stack. This lets them move their workloads to wherever that operating system runs. Other organizations snap the line higher up the stack at the container platform layer, which reduces complexity through even greater standardization. By standardizing at the container platform layer, the cloud team can share containers with the edge team since their platforms are built on the same common standards. This reuse reduces redundant development costs and lets each team develop the best common components and differentiate only where absolutely necessary.
[ Want to learn more about edge and data-intensive applications? Get the details on how to build and manage data-intensive intelligent applications in a hybrid cloud blueprint. ]
“The future ain’t what it used to be.”
The past year has put IT organizations to the test, and the future will only be more challenging. Your IT organization will likely reach crossroads, forcing you to make difficult decisions that could make or break your future. Typically, when you reach a crossroads, you must choose one direction and not look back – or, as Yogi Berra advised, “When you come to a fork in the road, take it.”
The crossroads you encounter this year will be easier to navigate if you begin your journey with nimbleness in mind. Build on that open substrate, and you won’t have to worry about having made the right choice when your needs change at a moment’s notice.
As we all learned last year, “It’s tough to make predictions, especially about the future.” Who knows what lies before us? With that in mind, keeping your options open will help you weather any unpredictable storms going forward.
Whatever we thought the future would hold for us in 2021, it definitely ain’t what it used to be.
[ Will your organization thrive in 2021? Learn the four priorities top CIOs are focusing on now. Download the HBR Analytic Services report: IT Leadership in the Next Normal. ]