What is explainable AI?

Explainable AI means humans can understand the path an IT system took to make a decision. Let’s break down this concept in plain English – and explore why it matters so much.

Explainable AI can help with AI bias and auditing 

Explainable AI will be increasingly important in other areas where trust and transparency matter – such as any scenario where AI bias could have a harmful impact on people.

“While it can be cumbersome to be tasked with returning explanations, it’s a worthwhile endeavor that can often reveal biases built into the models,” says Maturo of SPR. “In many industries, this transparency can be a legal, fiscal, medical, or ethical obligation. Wherever possible, the less a model appears to be magic, the more it will be adopted by its users.”

Explainable AI is also important to accountability and auditability, which will (or at least should) still reside with an organization’s people rather than its technologies.

“At the end of the day, you will be responsible for the decision. Just doing what the algorithm recommended is not a very convincing defense,” says Moshe Kranc, CTO of Ness Digital Engineering. Kranc also notes that explainable AI is crucial to identifying inaccurate outcomes that stem from problems such as biased or improperly tuned training data. Being able to trace the path an AI system took to arrive at a bad outcome helps people fix the underlying problems and prevent them from recurring.
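To make Kranc’s point concrete, here is a minimal sketch – not from any of the experts quoted here, with all feature names and data invented – of how inspecting a simple model can reveal the kind of bias he describes. It trains a tiny logistic-regression model by hand on synthetic data that includes a “proxy” feature standing in for a protected attribute, then explains the model by reading its learned weights:

```python
import math
import random

random.seed(0)

# Hypothetical synthetic data: two legitimate features plus a "proxy"
# feature that secretly stands in for a protected attribute. The labels
# depend almost entirely on the proxy, simulating biased historical data.
def make_row():
    x1, x2 = random.random(), random.random()
    proxy = random.choice([0.0, 1.0])
    label = 1 if (0.2 * x1 + 0.1 * x2 + 2.0 * proxy) > 1.0 else 0
    return [x1, x2, proxy], label

data = [make_row() for _ in range(500)]

# Train a small logistic-regression model with plain gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(50):
    for features, y in data:
        z = sum(wi * xi for wi, xi in zip(w, features)) + b
        p = 1.0 / (1.0 + math.exp(-z))
        w = [wi - lr * (p - y) * xi for wi, xi in zip(w, features)]
        b -= lr * (p - y)

# "Explain" the model by inspecting its learned weights: the proxy
# feature dominates, flagging a potential bias for human review.
for name, wi in zip(["x1", "x2", "proxy"], w):
    print(f"{name:6s} weight: {wi:+.2f}")
```

A linear model is deliberately chosen here because its weights are directly readable; for opaque models, dedicated explanation tools play the same role of tracing which inputs drove a decision.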

“AI is not perfect. And although AI predictions can be very accurate, there will always be the [possible] case where the model is wrong,” says Ji Li, data science director at CLARA analytics. “With explainability, the AI technology assists human beings in making quick, fact-based decisions but allows humans the capability to still use their judgment. With explainable AI, AI becomes a more useful technology because instead of always trusting or never trusting the predictions, humans are helping to improve the predictions every day.”
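The workflow Li describes – trusting confident predictions while leaving room for human judgment – boils down to a simple triage pattern. Here is a hedged sketch (the function name, labels, and threshold are illustrative, not from CLARA analytics):

```python
# A minimal human-in-the-loop sketch: act on high-confidence model
# predictions automatically, and route uncertain ones to a person
# who can apply their own judgment and override the model.
def triage(prediction: str, confidence: float, threshold: float = 0.9):
    """Return (decision, needs_review): auto-accept confident
    predictions, flag low-confidence ones for human review."""
    if confidence >= threshold:
        return prediction, False   # trust the model's call
    return prediction, True        # escalate to a human reviewer

# A confident prediction is handled automatically...
print(triage("approve_claim", 0.97))
# ...while an uncertain one is escalated to a person.
print(triage("deny_claim", 0.62))
```

Decisions overridden by reviewers can then be fed back as new training examples, giving the continual feedback loop the article describes.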

Indeed, explainable AI is ultimately about making AI more valuable in business contexts and in our everyday lives – while also preventing undesirable outcomes.

“Explainable AI is important to business because it gives us new ways to solve problems, appropriately scale processes, and minimize the opportunity for human error. That improved visibility helps increase understanding and improves the customer experience,” says Collins, the SAS CIO.

Collins notes that this is particularly important in regulated businesses like healthcare and banking, which will ultimately need to be able to show how an AI system arrived at a particular decision or outcome. But even in industries that won’t need to be able to audit their AI as a matter of regulatory compliance, the trust and transparency at the heart of explainable AI are worthwhile. They also make good business sense.

“We say that AI augments the human experience. In the case of explainable AI, humans augment the technology’s knowledge and experience to adjust and strengthen analytic models for future use,” Collins says. “Human knowledge and experience help the technology learn and vice versa. It’s a continual feedback loop that can become a dynamic asset for a business.”


Submitted By Stephanie Overby
October 16, 2019