What is explainable AI?

Explainable AI means humans can understand the path an IT system took to make a decision. Let’s break down this concept in plain English – and explore why it matters so much.

May 22, 2019

Explainable AI can help with AI bias and auditing 

Explainable AI will be increasingly important in other areas where trust and transparency matter, such as any scenario where AI bias may have a harmful impact on people.

“While it can be cumbersome to be tasked with returning explanations, it’s a worthwhile endeavor that can often reveal biases built into the models,” says Maturo of SPR. “In many industries, this transparency can be a legal, fiscal, medical, or ethical obligation. Wherever possible, the less a model appears to be magic, the more it will be adopted by its users.”

[ How can you guard against AI bias? Read also AI bias: 9 questions for IT leaders to ask. ]

Explainable AI is also important to accountability and auditability, which will (or at least should) still reside with an organization’s people rather than its technologies.

“At the end of the day, you will be responsible for the decision. Just doing what the algorithm recommended is not a very convincing defense,” says Moshe Kranc, CTO of Ness Digital Engineering. Kranc also notes that explainable AI is crucial to identifying inaccurate outcomes that stem from issues such as biased or improperly tuned training data. Being able to trace the path an AI system took to arrive at a bad outcome helps people fix the underlying problems and prevent them from recurring.
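To make "tracing the path" concrete, here is a minimal sketch of what an explanation might look like for a simple, interpretable model. The feature names, weights, and input values are hypothetical illustrations, not from any real system – the point is only that each prediction can be decomposed into per-feature contributions a human can inspect:

```python
# Hypothetical linear risk-scoring model. With a linear model, each
# feature's contribution to the score is simply weight * value, so the
# "path" to a decision can be shown directly to a reviewer.

WEIGHTS = {"claim_amount": 0.4, "prior_claims": 1.2, "days_to_report": 0.1}
BIAS = -2.0  # baseline score before any feature is considered

def score(features):
    """Return the model's raw score for one input."""
    return BIAS + sum(WEIGHTS[name] * value for name, value in features.items())

def explain(features):
    """Break the score into per-feature contributions, largest first."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

claim = {"claim_amount": 2.0, "prior_claims": 3, "days_to_report": 10}
print(score(claim))    # overall score
print(explain(claim))  # which features drove it, in order of impact
```

An auditor seeing that `prior_claims` dominated a denial can then ask whether that feature – or the data behind it – encodes a bias, which is exactly the kind of inspection a black-box model makes impossible. Real-world systems typically need richer techniques (for example, per-prediction attribution methods for nonlinear models), but the goal is the same.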

“AI is not perfect. And although AI predictions can be very accurate, there will always be the [possible] case where the model is wrong,” says Ji Li, data science director at CLARA analytics. “With explainability, the AI technology assists human beings in making quick, fact-based decisions but allows humans the capability to still use their judgment. With explainable AI, AI becomes a more useful technology because instead of always trusting or never trusting the predictions, humans are helping to improve the predictions every day.”

Indeed, explainable AI is ultimately about making AI more valuable in business contexts and in our everyday lives – while also preventing undesirable outcomes.

“Explainable AI is important to business because it gives us new ways to solve problems, appropriately scale processes, and minimize the opportunity for human error. That improved visibility helps increase understanding and improves the customer experience,” says Collins, the SAS CIO.

Collins notes that this is particularly important in regulated businesses like healthcare and banking, which will ultimately need to be able to show how an AI system arrived at a particular decision or outcome. But even in industries that won’t need to be able to audit their AI as a matter of regulatory compliance, the trust and transparency at the heart of explainable AI are worthwhile. They also make good business sense.

“We say that AI augments the human experience. In the case of explainable AI, humans augment the technology’s knowledge and experience to adjust and strengthen analytic models for future use,” Collins says. “Human knowledge and experience help the technology learn and vice versa. It’s a continual feedback loop that can become a dynamic asset for a business.”

[ Get real-world lessons learned from CIOs in the new HBR Analytic Services report, An Executive's Guide to Real-World AI. ]
