By Lee Congdon
We all need to move faster these days to keep our companies competitive. In IT, that increasingly means being able to provision technology quickly and take advantage of what the market has to offer without going through a lengthy capital-expenditure, server build-out, or server-provisioning exercise, which even for the best organizations can still take weeks or months.
IT agility becomes even more critical as users increasingly access content and applications on mobile devices. Staying ahead of mobile demand becomes easier when you provision for a global environment via cloud services as a complement to your data centers. Not only is the recipe for agility simpler, but you can typically provision your services more flexibly around the globe and get local response times from services designed from the ground up to be exposed and available on the Internet.
If you continue to manage your own infrastructure exclusively, especially in a cost-constrained environment, the temptation is to buy the equipment, run it for seven to ten years until it’s fully depreciated, and continue to extract value well beyond the normal life cycle.
The disadvantage, in many environments, is that you can’t afford to wait that long. Your competition, whether current or emerging, is focused on offering new capabilities and taking your customers away. So if you’re locked into aging technology that doesn’t give you the agility and flexibility you need, then even though you’re getting short-term cost benefits, long term you are likely putting your enterprise at a disadvantage. That’s why we recommend an open hybrid cloud architecture.
Upgrading at Competitive Cadence
One of the things I increasingly observe about cloud technology is that it forces you onto a fast upgrade cadence. If you’re using a software-as-a-service app, the vendor is going to upgrade regularly, so you need to get on that cadence. If you’re using infrastructure-as-a-service or platform-as-a-service solutions, they too are evolving and adding capability so quickly that you’re almost by default on a rapid upgrade cadence. The real advantage is that it’s hard to defer maintenance, so you don’t incur nearly as much technical debt as you would running legacy Unix servers and trying to keep those systems alive for 20 years. That’s important because a lot of organizations still think of IT and information-based solutions as a cost. Yet enterprise after enterprise, organization after organization, is being completely redefined by technology solutions.
Just two or three years ago, for example, if you had persistent sleep problems you could go to a sleep center and have instruments applied to you overnight. In the morning the staff could determine when you woke up, how deep your sleep was, and so on. A few years later, those businesses are largely gone, replaced by at-home alternatives: consumer products like the UP band and the Fitbit, or much lower-cost at-home solutions offered by medical professionals. Technology completely changed the business model in a very short time. My view is that the number of businesses that think they can continue to pursue an industrial-age IT approach without being exposed to someone else innovating them out of business is shrinking fast.
That brings up another advantage of a cloud strategy: you can devote resources to managing your business rather than to the necessary evil of managing technology that’s not directly related to your business objectives. You can start to think of technology as a competitive advantage rather than a cost.
Cloud as Innovation Source
Does that mean we are going to do away with hardware in IT? Not at all. It’s more that hardware will become even more of a commodity. As that happens, innovation will be driven on the cloud side, where you’re renting or buying hardware as a service. I would also say that we’re approaching early maturity in the virtualization of processing, though we haven’t yet really begun virtualizing data or building software-defined networking. As these solutions evolve, IT professionals will start to think more about configuring technology to meet their needs and less about buying servers, storage, and networking gear.
Virtualization of data typically means putting, for want of a better term, an abstraction layer between applications and their data. Red Hat Storage, for example, gives you a place to keep all your data and refer to it by name, without worrying about which devices it actually resides on, whether they’re in your data center or rented over the Internet. Red Hat JBoss Data Virtualization makes data spread across multiple systems in multiple formats appear to be a local database.
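The abstraction-layer idea can be sketched in a few lines of code. This is a minimal illustration of the concept, not the API of Red Hat Storage or JBoss Data Virtualization; the store classes and record names are hypothetical.

```python
class InMemoryStore:
    """Stands in for a volume in your own data center."""
    def __init__(self, records):
        self._records = records

    def get(self, key):
        return self._records.get(key)


class RemoteStoreStub:
    """Stands in for storage rented over the Internet."""
    def __init__(self, records):
        self._records = records

    def get(self, key):
        return self._records.get(key)


class DataVirtualizationLayer:
    """Applications ask for data by name; the layer resolves
    which backing store actually holds it."""
    def __init__(self, stores):
        self._stores = stores

    def get(self, key):
        for store in self._stores:
            value = store.get(key)
            if value is not None:
                return value
        raise KeyError(key)


layer = DataVirtualizationLayer([
    InMemoryStore({"orders/2024": "local-dataset"}),
    RemoteStoreStub({"archive/2015": "cloud-dataset"}),
])

print(layer.get("orders/2024"))   # served from the local store
print(layer.get("archive/2015"))  # served transparently from the remote store
```

The application code never changes when a dataset moves between stores; only the layer’s configuration does, which is the point of the abstraction.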
Software-defined networking is just that. Instead of buying hardware routers and switches and configuring your network around them, the trend is toward commodity, white-label hardware whose networking behavior is configured in software and adapts to each application’s requirements for performance, latency, and so on.
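To make the "configured in software" idea concrete, here is a toy sketch of a controller choosing a path per application’s latency requirement. The link names, properties, and policy are invented for illustration; real SDN controllers (OpenFlow-based ones, for instance) push the resulting rules down to the hardware.

```python
# Hypothetical link inventory the software controller knows about.
LINKS = {
    "fiber":     {"latency_ms": 5,  "cost": 10},
    "broadband": {"latency_ms": 30, "cost": 2},
}

def select_path(app_requirements):
    """Pick the cheapest link that satisfies the app's latency budget."""
    candidates = [
        name for name, props in LINKS.items()
        if props["latency_ms"] <= app_requirements["max_latency_ms"]
    ]
    if not candidates:
        raise ValueError("no link satisfies the requirement")
    return min(candidates, key=lambda name: LINKS[name]["cost"])

print(select_path({"max_latency_ms": 10}))  # latency-sensitive app gets "fiber"
print(select_path({"max_latency_ms": 50}))  # bulk transfer gets cheap "broadband"
```

Changing the policy means editing this function, not recabling or reconfiguring individual boxes, which is what makes the network "software-defined."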
All of this new capability refocuses energy and attention from managing and upgrading technology to thinking about, innovating on, and solving business problems, which at the end of the day is what IT organizations are here to do.
Lee Congdon is responsible for Red Hat’s global information systems, including the technology strategy, enterprise architecture, information technology governance, solutions delivery, and systems operations supporting the company. His role includes enabling Red Hat’s business through services, such as knowledge management, technology innovation, technology-enabled collaboration, and process improvement. Congdon has more than 25 years of experience as an IT leader.