Explainable AI: 4 industries where it will be critical

Explainable AI – which lets humans understand and articulate how an AI system made a decision – will be key in healthcare, manufacturing, insurance, and automobiles. What will it mean for your organization? 

May 29, 2019

3. Insurance

Insurance, like healthcare, is an industry where AI may have far-reaching impacts, but where trust, transparency, and auditability are absolute musts.

“There are a number of potential use cases for AI in insurance, such as customer acquisition, agent productivity, claims prevention, underwriting, customer service, cross-selling, policy adjustment, and improving risk and compliance,” says Matt Sanchez, founder and CTO at CognitiveScale. Sanchez points to a recent Accenture survey that found most insurance executives expect AI to revolutionize their industry in the next three years.

This is unquestionably an area with considerable impacts: just think of key insurance categories – life, homeowner’s, health, workers’ compensation, and so forth. Explainable AI will matter a great deal here. Sanchez suggests asking these questions, each of which also applies in other sectors (a brief sketch of how a system might support the answers follows the list):

  • Can the AI explain how it got to this insight or result?
  • What data, models, and processing have been applied to get this result?
  • Can my regulators access and understand how this AI works?
  • Who accessed what and when?
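
By way of illustration, here is a minimal Python sketch of the kind of audit trail those questions imply. Every name in it is hypothetical (the AuditRecord fields, the log_prediction helper, the model identifier); the point is simply that each decision gets stored alongside the model version, a fingerprint of the input data, the explanation, and the identity of whoever requested it, so the questions above can be answered after the fact.

    import hashlib
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    # Hypothetical audit record: one entry per AI decision, so an insurer can
    # later answer "what data, model, and processing produced this result?"
    # and "who accessed what, and when?"
    @dataclass
    class AuditRecord:
        timestamp: str      # when the decision was made
        requester: str      # who asked for it (agent, system, regulator)
        model_id: str       # which model version produced the result
        input_hash: str     # fingerprint of the exact input data used
        decision: str       # the output (e.g., "refer to underwriter")
        explanation: dict   # the factors that drove the result

    def log_prediction(requester, model_id, features, decision, explanation, store):
        """Append one audit entry for one decision."""
        record = AuditRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            requester=requester,
            model_id=model_id,
            input_hash=hashlib.sha256(
                json.dumps(features, sort_keys=True).encode()).hexdigest(),
            decision=decision,
            explanation=explanation,
        )
        store.append(asdict(record))  # in practice: a write-once audit log or database
        return record

    # Example: an underwriting decision recorded with the factors that drove it
    audit_log = []
    log_prediction(
        requester="agent:jsmith",
        model_id="underwriting-risk-v3.2",
        features={"age": 42, "prior_claims": 1, "home_value": 310000},
        decision="refer to underwriter",
        explanation={"prior_claims": 0.61, "home_value": 0.24, "age": 0.15},
        store=audit_log,
    )

A record like this is not explainability by itself, but it is the raw material regulators and auditors would need in order to reconstruct how a given result was reached.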

4. Autonomous vehicles

Explainable AI should ultimately be about making AI as valuable as possible. In some cases, the value of AI operating optimally is about as fundamental as it gets.

“Understanding why your AI service made a certain decision or how it derived a certain insight is key for AI practitioners to better integrate AI services,” says Stephen Blum, CTO and co-founder of PubNub. “Take autonomous vehicles. How the AI system is built and how it interacts with the vehicle is high stakes. It could mean life or death.”

Indeed, the emerging arena of self-driving vehicles is one where AI will certainly play a role – and where explainable AI will be paramount.

Appleby, the Kinetica CEO, puts the importance of explainability in this context in terms virtually anyone can understand: “If a self-driving car finds itself in a position where an accident is inevitable, what measures should it take? Prioritize the protection of the driver and put pedestrians in grave danger? Avoid pedestrians while putting the passengers’ safety at risk?”

Needless to say, these aren’t questions with easy answers. But here’s a fairly straightforward conclusion: The black box model of AI does not work in this context. Explainability will be a must, whether you’re the passenger or the pedestrian, not to mention the automaker, the public safety official, and so forth.

“We may disagree about the appropriate vehicle response, but we should be aware in advance of the moral priorities it is programmed to follow,” Appleby says. “With established data governance within the company, automakers can trace the data set by tracking and explaining how the model went from decision point A to point Z, therefore making it easier for them to assess whether or not these outcomes map to the ethical stance they intend to take as a company. Passengers can likewise determine if they are comfortable traveling in a vehicle that is designed to make certain decisions.”

This may be a particularly grim (albeit realistic) scenario, but again, there’s an underlying principle that translates widely, including into scenarios that are not urgent matters of life and death. Explainable AI is about improvement and optimization, and that’s another way for IT leaders to think about it going forward.

“If the AI system makes a mistake, builders need to understand why it did that so they can improve and fix it,” Blum says. “If their AI service lives and operates in a black box, they have no insight into how to debug and improve it.”
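
To make that concrete, here is a minimal sketch, assuming a scikit-learn model and synthetic data rather than any particular production system, of one common way builders look inside an otherwise opaque model: permutation importance, which shuffles one input at a time and measures how much the model's accuracy drops.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a real decision problem (e.g., flagging risky events).
    X, y = make_classification(n_samples=2000, n_features=6,
                               n_informative=3, random_state=0)
    feature_names = [f"feature_{i}" for i in range(X.shape[1])]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Shuffle each feature in turn on held-out data; a large accuracy drop means
    # the model leans heavily on that input.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)

    for idx in result.importances_mean.argsort()[::-1]:
        print(f"{feature_names[idx]}: {result.importances_mean[idx]:.3f} "
              f"+/- {result.importances_std[idx]:.3f}")

If a feature that should matter contributes nothing, or an irrelevant one dominates, that is exactly the kind of insight Blum describes: a concrete lead on where the model or its training data needs fixing.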

[ Get real-world lessons learned from CIOs in the new HBR Analytic Services report, An Executive's Guide to Real-World AI. ]
