What is explainable AI?

Explainable AI means humans can understand the path an IT system took to make a decision. Let’s break down this concept in plain English – and explore why it matters so much.

May 22, 2019

Artificial intelligence doesn’t need any extra fuel for the myths and misconceptions that surround it. Consider the phrase “black box” – its connotations are equal parts mysterious and ominous, the stuff of “The X Files” more than the day-to-day business of IT.

Yet it’s true that AI systems, such as machine learning or deep learning models, take inputs and then produce outputs (or make decisions) with no decipherable explanation or context. The system makes a decision or takes some action, and we don’t necessarily know why or how it arrived at that outcome. The system just does it. That’s the black box model of AI, and it’s indeed mysterious. In some use cases, that’s just fine. In other contexts, it’s plenty ominous.

“For small things like AI-powered chatbots or sentiment analysis of social feeds, it doesn’t really matter if the AI system operates in a black box,” says Stephen Blum, CTO and co-founder of PubNub. “But for use cases with a big human impact – autonomous vehicles, aerial navigation and drones, military applications – being able to understand the decision-making process is mission-critical. As we rely more and more on AI in our everyday lives, we need to be able to understand its ‘thought process’ and make changes and improvements over time.”

Enter explainable AI – sometimes known by the acronym XAI or similar terms such as interpretable AI. As the name suggests, it’s AI that can be explained and understood by humans, though that’s a somewhat reductive way of, um, explaining explainable AI.

Here’s a more robust definition from our recent HBR Analytic Services research report, “An Executive’s Guide to Real-World AI”: “Machine learning techniques that make it possible for human users to understand, appropriately trust, and effectively manage AI. Various organizations, including the Defense Advanced Research Projects Agency, or DARPA, are working on this.”

The word “trust” is critical, but let’s get back to that a little later. We asked Blum and other AI experts to share explainable AI definitions – and explain why this concept will be critical for organizations working with AI in fields ranging from financial services to medicine. This background can bolster your own understanding as well as your team’s, and help you help others in your organization understand explainable AI and its importance. Let’s start with the definitions.

Explainable AI defined in plain English

“The term ‘explainable AI’ or ‘interpretable AI’ refers to humans being able to easily comprehend through dynamically generated graphs or textual descriptions the path artificial intelligence technology took to make a decision.” –Keith Collins, executive vice president and CIO, SAS

“Explainable AI can be equated to ‘showing your work’ in a math problem. All AI decision-making processes and machine learning don’t take place in a black box – it’s a transparent service, built with the ability to be dissected and understood by human practitioners. To add ‘explanation’ to the output, adding input/output mapping is key.” –Stephen Blum, CTO and co-founder of PubNub

“Explainable AI is where we can interpret the outcomes of AI while being able to clearly traverse back, from outcomes to the inputs, on the path the AI took to arrive at the results.” –Phani Nagarjuna, chief analytics officer, Sutherland

“Explainable AI is a machine learning or artificial intelligence application that is accompanied by easily understandable reasoning for how it arrived at a given conclusion. Whether by preemptive design or retrospective analysis, new techniques are being employed to make the black box of AI less opaque.” –Andrew Maturo, data analyst, SPR

“Explainable AI in simple terms means AI that is transparent in its operations so that human users will be able to understand and trust decisions. Organizations must ask the question – can you explain how your AI generated that specific insight or decision?” –Matt Sanchez, founder and CTO of CognitiveScale
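The “showing your work” and “input/output mapping” ideas in these definitions can be made concrete with a toy example. The sketch below uses a simple linear scoring model, where each input feature’s contribution to the output can be listed explicitly – the kind of outcome-to-input traceability the definitions describe. The model, feature names, and weights are entirely hypothetical, chosen only to illustrate the idea.

```python
# A minimal sketch of "showing your work": for a linear model, the
# prediction decomposes into one additive contribution per feature,
# giving a human-readable input/output mapping. All names and weights
# here are hypothetical illustrations, not a real scoring system.

def predict_with_explanation(features, weights, bias=0.0):
    """Return (score, per-feature contributions) for a linear model."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical loan-scoring example
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

score, why = predict_with_explanation(applicant, weights)
print(f"score = {score:.1f}")  # 0.5*4.0 - 0.8*2.0 + 0.3*5.0 = 1.9
for name, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {contribution:+.1f}")
```

Real explainability techniques for complex models (such as additive feature attributions computed after the fact) generalize this same idea: attach to each prediction a breakdown that a human practitioner can inspect and challenge.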

Why explainable AI matters

Sanchez’s question begets another: Why does it matter? The reasons are myriad, with potentially enormous implications for people, businesses, governments, and society. Let’s again consider the term “trust.”

Heena Purohit, senior product manager at IBM Watson IoT, notes that AI – which IBM refers to as “augmented intelligence” – and machine learning already do a great job of processing vast amounts of data in an often complex fashion. But the goal of AI and ML, Purohit says, is to help people be more productive and to make smarter, faster decisions – which is much harder if people have no idea why they’re making those decisions.

“As the purpose of the AI is to help humans make enhanced decisions, the business realizes the true value of the AI solution when the user changes his behavior or takes action based on the AI output [or] prediction,” Purohit says. “However, in order to get a user to change his behavior, he will have to trust the system’s suggestions. This trust is built when users can feel empowered and know how the AI system came up with the recommendation [or] output.”

From an organizational leadership standpoint, explainable AI is, in a sense, about getting people to trust and buy into these new systems and how they’re changing the way we work.

“Having seen the ‘AI black box’ problem persist in initial days, I now ensure that our AI solutions are explainable,” Purohit adds. “A question I ask myself when designing the AI products to ensure the AI is explainable is: Does your AI make it easy for humans to easily perceive, detect, and understand its decision process?”


Kevin Casey writes about technology and business for a variety of publications. He won an Azbee Award, given by the American Society of Business Publication Editors, for his InformationWeek.com story, "Are You Too Old For IT?" He's a former community choice honoree in the Small Business Influencer Awards.
