What is explainable AI?
Explainable AI means humans can understand the path an IT system took to make a decision. Let’s break down this concept in plain English – and explore why it matters so much.
Explainable AI can help with AI bias and auditing
Explainable AI will be increasingly important in other areas where trust and transparency matter, such as any scenario where AI bias may have a harmful impact on people.
“While it can be cumbersome to be tasked with returning explanations, it’s a worthwhile endeavor that can often reveal biases built into the models,” says Maturo of SPR. “In many industries, this transparency can be a legal, fiscal, medical, or ethical obligation. Wherever possible, the less a model appears to be magic, the more it will be adopted by its users.”
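To make the idea concrete, here is a minimal sketch of what "returning explanations" can look like in practice. The model, feature names, and coefficients below are entirely hypothetical: a toy logistic regression for loan approval where each feature's contribution to the score is broken out individually. Ranking those contributions is one simple way an explanation can surface a built-in bias, such as a zip-code feature acting as a proxy for a protected attribute.

```python
import math

# Hypothetical, pre-trained logistic regression for loan approval.
# Feature names and weights are illustrative only, not from any real system.
FEATURES = ["income", "debt_ratio", "zip_code_risk"]
WEIGHTS = [0.8, -1.2, -0.9]  # assumed learned coefficients
BIAS = 0.1

def explain(applicant):
    """Return the model's approval probability and each feature's
    contribution to the underlying score."""
    contributions = {f: w * applicant[f] for f, w in zip(FEATURES, WEIGHTS)}
    score = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))
    return probability, contributions

prob, contribs = explain({"income": 0.5, "debt_ratio": 0.3, "zip_code_risk": 0.7})

# Sorting contributions by magnitude shows which features drove the decision.
# Here 'zip_code_risk' dominates -- a red flag worth auditing if zip code
# correlates with a protected attribute.
for name, value in sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name}: {value:+.2f}")
```

Real systems would use richer techniques (per-prediction attribution over non-linear models, for instance), but the principle is the same: expose which inputs moved the decision, so a human can judge whether they should have.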
[ How can you guard against AI bias? Read also AI bias: 9 questions for IT leaders to ask. ]
Explainable AI is also important to accountability and auditability, which will (or at least should) still reside with an organization’s people rather than its technologies.
“At the end of the day, you will be responsible for the decision. Just doing what the algorithm recommended is not a very convincing defense,” says Moshe Kranc, CTO of Ness Digital Engineering. Kranc also notes that explainable AI is crucial to identifying inaccurate outcomes that come from issues such as biased or improperly tuned training data and other issues. Being able to trace the path an AI system took to arrive at a bad outcome helps people fix the underlying problems and prevent them from recurring.
“AI is not perfect. And although AI predictions can be very accurate, there will always be the [possible] case where the model is wrong,” says Ji Li, data science director at CLARA analytics. “With explainability, the AI technology assists human beings in making quick, fact-based decisions but allows humans the capability to still use their judgment. With explainable AI, AI becomes a more useful technology because instead of always trusting or never trusting the predictions, humans are helping to improve the predictions every day.”
Indeed, explainable AI is ultimately about making AI more valuable in business contexts and in our everyday lives – while also preventing undesirable outcomes.
“Explainable AI is important to business because it gives us new ways to solve problems, appropriately scale processes, and minimize the opportunity for human error. That improved visibility helps increase understanding and improves the customer experience,” says Collins, the SAS CIO.
Collins notes that this is particularly important in regulated businesses like healthcare and banking, which will ultimately need to be able to show how an AI system arrived at a particular decision or outcome. But even in industries that won’t need to be able to audit their AI as a matter of regulatory compliance, the trust and transparency at the heart of explainable AI are worthwhile. They also make good business sense.
“We say that AI augments the human experience. In the case of explainable AI, humans augment the technology’s knowledge and experience to adjust and strengthen analytic models for future use,” Collins says. “Human knowledge and experience help the technology learn and vice versa. It’s a continual feedback loop that can become a dynamic asset for a business.”
[ Get real-world lessons learned from CIOs in the new HBR Analytic Services report, An Executive's Guide to Real-World AI. ]