Beth Israel Deaconess Medical Center (BIDMC), one of the largest hospitals in New England, had a problem. It was running short on operating room capacity, and building more ORs would take time, money, and staff. With over 20 years of historical data on OR activity in the hospital, CIO John Halamka knew BIDMC could tap into the power of AI to get more out of its existing ORs. But it had one major hurdle to overcome first: It had to refine its data and make it usable by an AI model.
BIDMC isn’t alone. New research from Harvard Business Review Analytic Services uncovered several data hurdles that companies across industries had to tackle before experimenting with AI – from tapping into unstructured data to questions of data storage, accessibility, and security.
[ Want lessons learned from CIOs applying AI? Get the full HBR Analytic Services report, An Executive's Guide to Real-World AI. ]
When AI meets APIs and microservices
The report found that, “companies that have been undergoing a digital transformation, moving to a platform or microservices architecture, have an advantage.”
For example, Caesars Entertainment spent the last three years moving all of its major systems and business functions to a "cloud-based platform architecture that allows us to unlock data,” says Caesars Entertainment CIO Les Ottolenghi. “This has enabled us to experiment with AI in an effective manner. Managing the complexity of the data rationalization, the models that you create, and then the learning that is done … requires all of that to happen first.”
The use of application programming interfaces (APIs) and microservices makes it possible for organizations to leverage all the data at the company’s disposal. At Bayer Crop Science, “we have every type of data storage under the sun,” says Jim Swanson, Bayer Crop Science’s chief information officer and head of digital transformation. “We have relational databases as well as open NoSQL databases, graph databases, and open source technologies like HBase and Cassandra.” APIs and microservices provide access to that data, which “allows it to be democratized and accessible, while simultaneously ensuring data security.”
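The pattern Swanson describes – one API layer in front of many different data stores – can be sketched with a simple facade over backend adapters. Everything here (class names, the sample data, the `fetch` method) is hypothetical illustration, not Bayer's actual implementation:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: a single data-access API in front of
# heterogeneous stores, so callers never touch the underlying
# relational / NoSQL backends directly.

class RecordStore(ABC):
    """Uniform contract every backend adapter must satisfy."""
    @abstractmethod
    def get(self, record_id):
        ...

class RelationalAdapter(RecordStore):
    # Stand-in for a SQL-backed store; real code would run a query.
    def __init__(self, rows):
        self._rows = rows

    def get(self, record_id):
        return self._rows.get(record_id)

class DocumentAdapter(RecordStore):
    # Stand-in for a NoSQL document store such as Cassandra or HBase.
    def __init__(self, docs):
        self._docs = docs

    def get(self, record_id):
        return self._docs.get(record_id)

class DataService:
    """The 'microservice' layer: one entry point, many backends."""
    def __init__(self, backends):
        self._backends = backends

    def fetch(self, source, record_id):
        # Callers name a logical source, not a database technology.
        return self._backends[source].get(record_id)

service = DataService({
    "orders": RelationalAdapter({"o1": {"item": "seed", "qty": 10}}),
    "sensors": DocumentAdapter({"s1": {"field": "7A", "moisture": 0.31}}),
})
print(service.fetch("sensors", "s1"))  # {'field': '7A', 'moisture': 0.31}
```

Because consumers only see the `DataService` interface, access control and auditing can be enforced in one place – one way to square "democratized and accessible" with data security.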
Companies that want to tap into the ongoing stream of unstructured data that comes from customer behavior, operational systems, internet of things (IoT) sensors, and more are moving beyond traditional ways of storing data and building data lakes and data ingestion services. “This is not, oh, let’s create a data warehouse,” says Equifax CTO Bryson Koehler. “Operational systems don’t move onto a data warehouse, but operational systems can move onto a data lake. You need to get that right first.”
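The distinction Koehler draws – a lake ingests raw, unstructured events as-is, with structure imposed only at read time – can be illustrated with a minimal schema-on-read sketch. The paths, field names, and sample events below are all made up for illustration:

```python
import json
import tempfile
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical sketch of data-lake ingestion: events land as raw
# JSON lines, partitioned by date. No schema is enforced on write;
# readers extract only the fields they need (schema-on-read).

def ingest(lake_root, source, event):
    """Append one raw event to a date-partitioned lake path."""
    day = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    partition = lake_root / source / f"dt={day}"
    partition.mkdir(parents=True, exist_ok=True)
    out = partition / "events.jsonl"
    with out.open("a") as f:
        f.write(json.dumps(event) + "\n")
    return out

def read_field(path, field):
    """Schema-on-read: pull out only what a model or query needs."""
    with path.open() as f:
        return [json.loads(line).get(field) for line in f]

root = Path(tempfile.mkdtemp())
# Note the second event has an extra field the first lacks --
# a warehouse schema would reject or ignore it; the lake keeps it.
path = ingest(root, "iot", {"sensor": "s1", "temp": 21.5})
ingest(root, "iot", {"sensor": "s2", "temp": 19.0, "battery": 0.9})
print(read_field(path, "temp"))
```

The design point is that ingestion never blocks on schema agreement, which is why operational and IoT systems can feed a lake directly when they could not feed a warehouse.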
Successful AI starts with high-quality data
Examples like these and others in the report underscore the urgency for companies to get started with AI now if they haven’t already. Having the right data infrastructure as well as sufficient quantity and quality of data is one of the foundational pieces companies must have in place to succeed, according to the report.
The report, “An Executive's Guide to Real-World AI,” includes interviews and case studies from AI trailblazers at companies like Adobe, 7-Eleven, Bayer Crop Science, Capital One, Discover, and more who are rewriting the rules on refining data for AI. Download it for their ideas and guidance on getting started.