5 ways cloud storage and data services enable the future of development in the AI age

Expecting traditional storage and data constructs to deliver the portability, scale, and speed that cloud-native applications demand is sure to disappoint

Why is data - the crown jewel of business assets - so hard to manage, secure, and monetize despite the focus it gets from customers and vendors alike?

The answer may lie in the question.

In speaking with Fortune 500 CIOs, I’ve come to realize that most attempts to extract timely business insights from data are still rooted in the way we’ve dealt with data historically. Innovation around cloud storage and data services can drive business value as artificial intelligence and machine learning (AI/ML) gain mainstream adoption across the globe.

[ How can automation free up more staff time for innovation? Get the free eBook: Managing IT with Automation. ]

1. Humans to machines

Back in the day, the end goal of business intelligence and data management software was to cough up human-readable insights. Precision was valued over context. Completeness over timeliness.

Fast forward to today’s digital world driven by AI and ML. Algorithms consume data insights and turn them into actions, only a fraction of which are meant for humans. Data flows in and out at will, in various forms and at astonishing rates.

How can we expect a human-intensive data mindset to stay relevant in a machine-driven world?

2. Applications and data - two sides of the same coin

Application development has undergone a complete overhaul in the new millennium. Agile processes have given developers the luxury to fail fast, iterate often, and deliver in continuous increments. DevOps tooling has shrunk development workflows and improved software quality.

Many AI/ML engineers and data scientists will attest that, while building applications has gotten easier, managing the large and varied stores of data those applications depend on has gotten out of hand. In particular, data acquisition and preparation have begun to take on the appeal of a root canal without novocaine.
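To make that pain concrete, here is a minimal Python sketch of the kind of acquisition-and-cleanup plumbing that eats up data scientists' time before a model ever sees a row. The file name, column names, and cleanup rules are hypothetical, chosen only to illustrate the shape of the work.

```python
# Illustrative only: a hypothetical pre-training cleanup pass.
# The file, columns, and rules are assumptions, not a real pipeline.
import pandas as pd

def prepare_orders(path: str) -> pd.DataFrame:
    # Acquisition: pull a raw export whose schema nobody fully owns.
    raw = pd.read_csv(path, dtype=str)

    # Preparation: the unglamorous work that dominates AI/ML projects.
    df = (
        raw.dropna(subset=["order_id"])            # drop rows missing a key
           .drop_duplicates(subset=["order_id"])   # de-duplicate replayed records
           .copy()
    )
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce").fillna(0.0)
    df["region"] = df["region"].str.strip().str.upper()  # normalize free-text fields
    return df

if __name__ == "__main__":
    clean = prepare_orders("orders.csv")
    print(clean.dtypes)
```

Multiply that by every source system, format, and access policy in an enterprise, and the frustration is easy to understand.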

The rapid rise of containers and hybrid cloud has further exacerbated the frustration of data stakeholders, who struggle to strike a balance between enabling greater innovation for developers and making data more accessible yet secure.

There is no one right answer to this conundrum. However, there is much evidence to suggest that the most successful enterprises treat application modernization and data modernization as two facets of the same challenge, rather than leaving data modernization for later.

[ Evaluating hybrid cloud options? Get the checklist: 5 reasons you need persistent hybrid cloud storage. ]

3. Cloud-native data services for cloud-native workloads

Some enterprises fail to fully capitalize on their investment in cloud-native development methodologies and technology because outdated data and storage stacks hold them back. Expecting traditional storage and data constructs to deliver the portability, scale, and speed that cloud-native applications demand is sure to disappoint.

The good news is it doesn’t have to be this hard. The key is to unlock the power of data in new and important ways, while making data accessible, resilient, and actionable to applications across the open hybrid cloud.

Cloud-native data services create an open hybrid cloud application environment with easy-to-use services for intelligently moving, storing, transforming, responding to, and learning from enterprise data. The modern CIO is well served by working with a trusted adviser who can deliver on the promise of cloud-native data services.

[ Read also: Data Services for the open hybrid cloud deliver on the promise of cloud-native infrastructure. ]

4. Agility and scale redefined

As the industry moves toward infrastructure-as-code, business leaders need greater agility, scale, and consistency from IT infrastructure, and those demands continue to rise in the new era of intelligent applications and agile development workflows. Traditional storage vendors have had to reinvent themselves or risk extinction.

Business and IT leaders may find it useful to view these challenges through the lens of data at rest, data in motion, and data in action, a framing that reflects modern data pipelines in the era of Kubernetes, hybrid cloud, and real-time developer workflows.
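As a rough illustration of that lens, the toy Python pipeline below composes an at-rest stage (reading persisted records), an in-motion stage (transforming them as they stream), and an in-action stage (turning them into actions an application consumes). The file name, record fields, and threshold are assumptions for the sketch, not any vendor's API.

```python
# A toy sketch of the "at rest / in motion / in action" lens.
# All names, fields, and thresholds are illustrative assumptions.
import json
from typing import Iterable, Iterator

def data_at_rest(path: str) -> Iterator[dict]:
    """Read persisted records (one JSON object per line), standing in for object or block storage."""
    with open(path) as f:
        for line in f:
            yield json.loads(line)

def data_in_motion(records: Iterable[dict]) -> Iterator[dict]:
    """Transform records as they stream between services."""
    for rec in records:
        rec["value"] = float(rec.get("value", 0))
        yield rec

def data_in_action(records: Iterable[dict], threshold: float = 100.0) -> Iterator[str]:
    """Turn enriched records into actions an application (or a human) consumes."""
    for rec in records:
        if rec["value"] > threshold:
            yield f"alert: {rec.get('id', 'unknown')} exceeded {threshold}"

if __name__ == "__main__":
    actions = data_in_action(data_in_motion(data_at_rest("events.jsonl")))
    for action in actions:
        print(action)
```

The real value of the framing is that each stage can be scaled, secured, and modernized on its own terms while still composing into one flow.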

5. AI/ML built to survive market shocks

Events like the COVID-19 pandemic can throw AI-driven supply chain algorithms into a tailspin, since such events lie far outside the training data sets. As a result, data engineers are widening the aperture of future training data sets to include market shocks. There is widespread agreement among business and political leaders that data may help chart the path out of the pandemic and truly transform us for the better.
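As a loose sketch of what "widening the aperture" can look like in practice, the Python snippet below augments a hypothetical demand series with synthetic shock scenarios before retraining. The series, shock magnitudes, and durations are invented for illustration, not drawn from any real supply chain.

```python
# Illustrative augmentation of a demand series with synthetic market shocks.
# The series, magnitudes, and durations are assumptions for the sketch.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical historical weekly demand covering only a stable regime.
history = 1000 + rng.normal(0, 25, size=156)  # three "normal" years

def inject_shock(series: np.ndarray, start: int, weeks: int, drop: float) -> np.ndarray:
    """Return a copy of the series with a sudden demand drop and a slow recovery."""
    shocked = series.copy()
    recovery = np.linspace(drop, 1.0, weeks)   # e.g. 0.4 -> 1.0 over the shock window
    shocked[start:start + weeks] *= recovery
    return shocked

# Widen the training set: keep the stable history and add shock scenarios.
scenarios = [history] + [
    inject_shock(history, start=int(rng.integers(10, 120)), weeks=26, drop=0.4)
    for _ in range(5)
]
training_set = np.concatenate(scenarios)

print(f"{len(history)} stable weeks widened to {len(training_set)} training weeks")
```

The model trained on the widened set has at least seen what a shock looks like, instead of treating every disruption as unprecedented.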

The value of data is hard to overstate. It is coveted by hackers (the headlines are always about a data breach, never an application logic breach). It is sought after by every public cloud vendor, since data stickiness drives platform stickiness.

Cloud-native data services enable modern enterprises to separate signal from noise and unlock the potential of their data in the age of AI.

[ Get the free O'Reilly eBooks: Kubernetes Operators: Automating the Container Orchestration Platform and Kubernetes patterns for designing cloud-native apps. ]

Mike Piech serves as Vice President and General Manager of Cloud Storage & Data Services at Red Hat. Previously, he led the middleware business unit at Red Hat. Mike started at Red Hat in March 2013. He currently resides in the San Francisco Bay Area.