Multi-cloud spending: 8 tips to lower costs
Are you overspending with your multi-cloud strategy? IT leaders and multi-cloud experts share their best advice on how to optimize costs – and avoid unnecessary waste when using multiple cloud providers
5. Scrutinize the right applications for multi-cloud
Trying to optimize the costs of running certain workloads in the cloud is a tougher task when those workloads weren't good fits for a multi-cloud environment in the first place. Avoiding this comes down to proper evaluation and planning; it's another risk inherent in a catch-all "get to the cloud" strategy.
“First and foremost, careful consideration needs to be given to the applications that are candidates for a multi-cloud deployment,” says Scott Sneddon, senior director and evangelist, multi-cloud solutions, at Juniper Networks. “If the application can’t be broken down into smaller parts, doesn’t lend itself to distribution, and isn’t tolerant to variances in latency and performance, it might not be worthwhile to attempt a multi-cloud deployment in the first place. Cloud-native, microservice-based applications lend themselves to distributed multi-cloud deployments much better than traditional monolithic applications.”
Cohen from Turbonomic finds that migrating the wrong applications to the cloud is particularly common in companies that see (or simply hope) that cloud will heal everything that ails them, from bloated applications and infrastructure to problematic culture and processes.
“We see many organizations that treat the cloud as the cure for everything that is broken in the estate,” Cohen says. “But putting a non-scalable, traditional application in the cloud won’t make it any easier to control. We think of the cloud as elastic, but it is only as elastic as the applications we put in it.”
6. Abstract away from specific platforms
Multi-cloud implementations run the risk of creating “cloud-native” iterations of certain legacy problems that you presumably were hoping would remain, well, legacy problems. And these problems can become expensive.
“There is a great risk of falling into the trap of operational silos as organizations move to multi-cloud deployments,” says Sneddon from Juniper Networks. “Each cloud platform provides its own tools and processes to ease deployment. But these tools are usually specific to each platform and really become a new form of lock-in.”
In addition to silos and lock-in, you could also find yourself brewing a big batch of tool soup. Abstracting operational tasks away from a particular cloud platform can help.
“IT teams need to do everything they can to drive consistency and simplicity across their multi-cloud deployments, and should adopt tools and platforms that allow them to operate their applications in a way that is more abstracted from the infrastructure that they’re running on top of,” Sneddon says. “That way, the application as well as the process to deploy that application can be much more portable. Tools like Kubernetes and Ansible are great for this approach. Additionally, adopting a multi-cloud-capable network and security platform is key and can simplify the process of ensuring compliance across multiple cloud environments.”
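The abstraction idea can be sketched in code: hide each provider's deployment tooling behind a single interface so the operational process stays identical across clouds. This is a minimal, hypothetical illustration (the provider classes and command strings are invented for the sketch), not a substitute for tools like Kubernetes or Ansible, which do this at much greater depth.

```python
from abc import ABC, abstractmethod

class CloudDeployer(ABC):
    """Common interface: the operations process calls deploy(),
    never a provider-specific tool directly."""
    @abstractmethod
    def deploy(self, app: str, image: str) -> str: ...

class AwsDeployer(CloudDeployer):
    def deploy(self, app: str, image: str) -> str:
        # In practice this would invoke provider tooling; the
        # command string here is purely illustrative.
        return f"aws-deploy {app} --image {image}"

class GcpDeployer(CloudDeployer):
    def deploy(self, app: str, image: str) -> str:
        return f"gcloud-deploy {app} --image {image}"

def roll_out(deployers, app, image):
    """One identical rollout process against every cloud --
    the portability win Sneddon describes."""
    return [d.deploy(app, image) for d in deployers]

commands = roll_out([AwsDeployer(), GcpDeployer()], "billing", "billing:1.4.2")
```

The point of the pattern is that adding a third cloud means adding one class, not rewriting the rollout process; the platform-specific knowledge is contained instead of leaking into every runbook.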
7. Dollars and sense: Don’t miss the big picture
Careful attention to detail is a good thing (otherwise, you whiffed on #1 and #2 on this list). But distinguish careful attention from myopia. One symptom to watch for: You obsess over unit costs – down to the penny (or fraction of a penny) with many public cloud SKUs – without looking up to see the big picture.
“One major pitfall to avoid is getting bogged down in multi-cloud’s microeconomics,” says Aden from 2nd Watch. “One cloud provider might have a lower price for a given product or SKU, but once the entire solution is engineered the cost may be greater or performance and reliability may be reduced as compared to another cloud provider. In other words, saving a few bucks today may end up costing you more in the long run.”
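Aden's microeconomics point can be made concrete with a back-of-the-envelope comparison. All prices below are hypothetical, not real provider rates: the provider with the cheaper per-instance SKU can still cost more once the engineered solution needs extra instances and pays higher data-out charges.

```python
def monthly_cost(instance_price_hr, instances, egress_gb, egress_price_gb,
                 hours=730):
    """Whole-solution monthly cost: compute plus data-out charges.
    All inputs are illustrative, not real provider pricing."""
    return instance_price_hr * instances * hours + egress_gb * egress_price_gb

# Provider A: cheaper per-instance SKU, but the engineered solution
# needs more instances for reliability and pays more for egress.
cost_a = monthly_cost(instance_price_hr=0.090, instances=6,
                      egress_gb=5000, egress_price_gb=0.12)

# Provider B: pricier SKU, smaller footprint, cheaper egress.
cost_b = monthly_cost(instance_price_hr=0.105, instances=4,
                      egress_gb=5000, egress_price_gb=0.08)

print(f"Provider A: ${cost_a:,.2f}/mo  Provider B: ${cost_b:,.2f}/mo")
```

With these (invented) numbers, the "cheaper" SKU comes out several hundred dollars a month more expensive once the whole solution is priced, which is exactly the trap Aden warns about.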
Cantor from Park Place Technologies points to "data out" costs – the cost, which sometimes gets less fanfare, of moving your data out of a particular cloud environment, or between environments – as another area where seeing the big picture of your spending matters. (Cantor also notes this is an example of a cost that's easier to optimize for in the planning and architecture phase than it is to retrofit later.)
“[Make] sure data out is optimized – all the cloud platforms put a higher price on getting data out,” Cantor says. “The more processing done inside the cloud and not moved among multiple clouds, the lower the data out cost.”
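A rough sketch shows why Cantor's advice pays off (the per-GB rate here is hypothetical; real data-out pricing is tiered and varies by provider): filtering or aggregating data inside the cloud before moving it cuts the egress bill roughly in proportion to the reduction in bytes transferred.

```python
def egress_cost(gb_moved, price_per_gb=0.09):
    """Data-out charge at a hypothetical flat per-GB rate."""
    return gb_moved * price_per_gb

raw_gb = 10_000        # shipping the raw dataset to another cloud
aggregated_gb = 250    # same analysis, aggregated in place first

naive = egress_cost(raw_gb)           # process elsewhere, move everything
optimized = egress_cost(aggregated_gb)  # process inside the cloud, move results

print(f"naive: ${naive:,.2f}  optimized: ${optimized:,.2f}")
```

The absolute numbers are invented, but the structure is the real lesson: the more processing done inside the cloud, the fewer bytes cross the boundary where the meter runs.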
8. Consider making resource optimization a full-time job
Cantor notes that there’s currently lots of “jousting” in the industry around cloud spending. One camp thinks that actual cloud costs are too high relative to on-premises infrastructure. That might be true, in Cantor’s view, if those CIOs were running their internal data centers at 100 percent capacity – but no one does that, he says.
“Cloud gives us the ability to run at 100 percent capacity and know that we can ramp up capacity in minutes without a big capital outlay and implementation, and that’s where the cost comes into line and may even improve over owned infrastructure,” Cantor says. “However, the capability to operate cloud infrastructure at this level of expertise has to be designed in from the start and is a new skill for IT that doesn’t exist today.”
Speaking of costs, not every organization will be ready or able to take this step. As multi-cloud environments proliferate and scale, however, Cantor thinks that resource optimization can't be something heaped onto an existing team's plate. It needs someone's full-time attention – and plenty of patience on the part of that person's boss, because there aren't many individuals fully equipped to tackle the role yet, according to Cantor.
“Someone has to be in the cloud optimization job,” Cantor says. “This is not a sideline for the infrastructure team, and this is not something that can be trained on yet. It’s a job that has to be filled full-time with the expectation that it’s a learning experience until the industry matures further.”
[ Want to learn more about building cloud-native apps and containers? Get the whitepaper: Principles of container-based application design. ]