Explainable AI: 4 industries where it will be critical

Explainable AI – which lets humans understand and articulate how an AI system made a decision – will be key in healthcare, manufacturing, insurance, and automobiles. What will it mean for your organization? 

Let’s say that I find it curious how Spotify recommended a Justin Bieber song to me, a 40-something non-Belieber. That doesn’t necessarily mean that Spotify’s engineers must ensure that their algorithms are transparent and comprehensible to me; I might find the recommendation a tad off-target, but the consequences are decidedly minimal.

This is a fundamental litmus test for explainable AI – that is, machine learning algorithms and other artificial intelligence systems that produce outcomes humans can readily understand and trace back to their origins. The greater the potential consequences of AI-based outcomes, the greater the need for explainable AI. Conversely, relatively low-stakes AI systems might be just fine with the black box model, where we don’t understand (and can’t readily figure out) the results.

“If algorithm results are low-impact enough, like the songs recommended by a music service, society probably doesn’t need regulators plumbing the depths of how those recommendations are made,” says Dave Costenaro, head of artificial intelligence R&D at Jane.ai.

I can live with an app’s misunderstanding of my musical tastes. I may not be able to live with far more consequential decisions made by AI systems – perhaps literally, in the case of a recommended medical treatment or a rejected mortgage application.

These are high-stakes scenarios, and particularly in the event of a negative outcome, I may need (and may be entitled to) a clear explanation of how a particular result was reached. In many cases, so will auditors, lawyers, government agencies, and other interested parties.


A related litmus test: As the responsibility for a particular decision or result shifts away from humans to machines, the need for explainability also rises.

“If an algorithm already has humans in the loop, the human decision-makers can continue to bear the responsibility of explaining the outcomes as previously done,” Costenaro says. He gives as an example a computer-vision system that pre-labels x-ray images for a radiologist. “This can help the radiologist work more accurately and efficiently, but ultimately, he or she will still provide the diagnosis and explanations.”
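To make that distinction concrete, here is a minimal sketch of the human-in-the-loop pattern Costenaro describes: the model only pre-labels a study, and the radiologist records both the final diagnosis and the explanation. The class and function names are hypothetical stand-ins, not any vendor’s API.

```python
"""Minimal human-in-the-loop sketch (hypothetical names, not a real API):
the model only pre-labels an image; the radiologist keeps the final call
and the explanation of record."""
from dataclasses import dataclass
import random


@dataclass
class Suggestion:
    finding: str
    confidence: float


@dataclass
class FinalReport:
    model_suggestion: Suggestion
    radiologist_finding: str
    radiologist_notes: str  # the human explanation stays on record


def prelabel(image_id: str) -> Suggestion:
    """Stand-in for a computer-vision model that proposes, but does not decide."""
    finding = random.choice(["no acute finding", "suspicious opacity"])
    return Suggestion(finding, confidence=round(random.uniform(0.5, 0.99), 2))


def sign_off(image_id: str, finding: str, notes: str) -> FinalReport:
    """The radiologist reviews the suggestion, then records the diagnosis and
    the reasoning, so accountability remains with the human."""
    return FinalReport(prelabel(image_id), finding, notes)


report = sign_off("study-001", "no acute finding",
                  "Opacity is an artifact of patient motion.")
print(report)
```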


New AI responsibility for IT: Explaining the why 

As AI matures, however, we’re likely to see the growth of new applications that decreasingly rely on human decision-making and responsibility. Music recommendation engines might not have a particularly high burden of responsibility, but many other real or potential use cases will.

“For a new class of AI decisions that are high-impact and that humans can no longer effectively participate in, either due to speed or volume of processing required, practitioners are scrambling to develop ways to explain the algorithms,” Costenaro says.

IT leaders will need to take the reins to ensure their organization’s AI use cases properly incorporate explainability when necessary. This issue is already on many CIOs’ radars, says Gaurav Deshpande, VP of marketing at TigerGraph. Deshpande says that even when enterprise CIOs inherently see the value of a particular AI technology or use case, there’s often a “but” in their response.

As in: “‘…but if you can’t explain how you arrived at the answer, I can’t use it,’” Deshpande says. “This is because of the risk of bias in the ‘black box’ AI system that can lead to lawsuits and significant liability and risk to the company brand as well as the balance sheet.”


This is another way of thinking about how and why enterprises will adopt explainable AI systems instead of operating black box models; their businesses may depend on it. To bring back Bieber one final time, if a music app misses the mark with its Bieber recommendation, my claim of AI bias might be misguided. In higher-stakes contexts, a similar claim might be quite serious. That’s why explainable AI is likely to be a focal point in business applications of machine learning, deep learning, and other disciplines.

Explainable AI’s role in four industries: A closer look 

Ask Moshe Kranc, CTO at Ness Digital Engineering, about potential use cases for explainable AI, and the answer is both simple and far-reaching: “Any use case that impacts people’s lives and could be tainted by bias.”

He shares a few examples of decisions likely to be increasingly made by machines but that will fundamentally require trust, auditability, and other characteristics of explainable AI:

  • Admission to a training program or a university
  • Deciding whether to insure someone and for how much
  • Deciding whether to issue someone a credit card or a loan based on demographic data

With this in mind, we asked various AI experts and IT leaders to identify industries and use cases where explainable AI will be a must. (Note: The banking industry is also a good example here, but we’ve mentioned it enough already. Suffice it to say, explainable AI is a good fit wherever machines make, or play a key role in, lending decisions and other financial services decisions.) In many cases, these uses are extensible to other industries: the details may vary, but the principles remain the same. These examples might help you think through explainable AI use cases in your own organization.

1. Healthcare

Revisiting our first litmus test: the need for explainable AI rises along with the potential human impact. That makes healthcare about as good a place to start as any, in part because it’s also an area where AI could be enormously beneficial.

“A machine using explainable AI could save the medical staff a great deal of time, allowing them to focus on the interpretive work of medicine instead of on a repetitive task. They could see more patients, and at the same time give each patient more of their attention,” says Paul Appleby, CEO of Kinetica. “The potential value is great but requires the traceable explanation that explainable AI delivers. Explainable AI allows a machine to assess data and reach a conclusion, but at the same time gives a doctor or nurse the decision lineage data to understand how that conclusion was reached, and therefore, in some cases, come to a different conclusion that requires the nuance of human interpretation.”

Keith Collins, EVP and CIO at SAS, shares a specific real-world application. “We’re presently working on a case where physicians use AI analytics to help detect cancerous lesions more accurately. The technology acts as the physician’s ‘virtual assistant,’ and it explains how each variable in an MRI image, for example, contributes to the technology identifying suspicious areas as probable for cancer while other suspicious areas are not.”
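Collins doesn’t detail SAS’s method, but the general idea of per-variable contributions can be sketched with a simple model. The example below uses made-up feature names and synthetic data, and attributes a flagged case to each input via the coefficients of a logistic regression, one basic way to produce decision lineage for a prediction.

```python
"""Toy per-variable attribution for a flagged case. The features, data, and
the linear model are illustrative assumptions, not SAS's implementation."""
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["lesion_area", "edge_sharpness", "contrast_uptake"]

# Synthetic imaging-derived features; labels loosely driven by the first two.
X = rng.normal(size=(200, 3))
y = (0.9 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

case = X[0]
# For a linear model, coefficient * value is each feature's contribution
# to the log-odds of the "suspicious" class.
contributions = model.coef_[0] * case
for name, value in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
    print(f"{name}: {value:+.2f}")
print("flagged as suspicious" if model.predict(case.reshape(1, -1))[0] else "not flagged")
```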

2. Manufacturing

Heena Purohit, senior product manager for IBM Watson IoT, notes that in manufacturing, field technicians often rely on “tribal knowledge” when it comes to diagnosing and fixing equipment failures. (This has parallels in other industries, too.) The problem with tribal knowledge is that the tribe can change, sometimes significantly: People come and go, and so does their know-how, which doesn’t always get recorded or transferred.

“AI-driven natural-language processing can help analyze unstructured data such as equipment manuals, maintenance standards, along with structured data such as historical work orders, IoT sensor readings, and business process data to come up with the best recommendation of prescriptive guidance the technician should follow,” Purohit says.
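Purohit doesn’t describe IBM’s implementation, but the retrieval step she outlines, matching a technician’s problem description against manuals and historical work orders, can be illustrated with a toy TF-IDF example. The knowledge-base snippets below are invented for illustration.

```python
"""Toy retrieval over mixed manual text and work-order history. The snippets
and the TF-IDF approach are illustrative assumptions, not IBM's design."""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "Manual: if the pump vibrates under load, inspect the seal and bearings.",
    "Work order 1182: pump vibration resolved by replacing a worn seal.",
    "Manual: sensor drift above 5% requires recalibration procedure B-3.",
]
query = ["pump vibrating and making noise under load"]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(knowledge_base)
query_vector = vectorizer.transform(query)

# Rank candidate guidance by similarity to the technician's description.
scores = cosine_similarity(query_vector, doc_vectors)[0]
for score, doc in sorted(zip(scores, knowledge_base), reverse=True):
    print(f"{score:.2f}  {doc}")
```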

This approach doesn’t eliminate the value of tribal knowledge, nor does it cut human decision-making out of the loop. Rather, it’s an iterative and interactive process that helps ensure knowledge is both stored and shared in actionable form.

“In this case, we show the user the multiple potential options of repair guidance recommendations driven from the AI and our percentage confidence interval of each response being the likely answer. The user is given upvote and downvote options on each, which helps with the continuous learning process and improves future recommendations,” Purohit explains. “This way, we don’t give the user [only] one answer; [we] allow him to make an intelligent decision between choices. For each recommendation, as an advanced feature, we also show the user the knowledge graph output with the input used during the AI training phase to help the user understand the parameters on why the result was prioritized and scored accordingly.”
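The interaction Purohit describes, with several ranked options, confidence scores, and upvote/downvote controls that feed future rankings, might look roughly like the following sketch. The data structures, scores, and feedback weighting here are illustrative assumptions, not IBM’s design.

```python
"""Sketch of surfacing multiple repair recommendations with confidence scores
and folding technician votes back into the ranking (all values invented)."""
from dataclasses import dataclass


@dataclass
class Recommendation:
    guidance: str
    confidence: float  # model's score for this option
    upvotes: int = 0
    downvotes: int = 0

    def adjusted_score(self) -> float:
        # Blend model confidence with technician feedback; the 0.05 weight
        # is arbitrary, purely to show the feedback loop.
        return self.confidence + 0.05 * (self.upvotes - self.downvotes)


def rank(recs: list[Recommendation]) -> list[Recommendation]:
    return sorted(recs, key=Recommendation.adjusted_score, reverse=True)


recs = [
    Recommendation("Replace the pump seal", 0.62),
    Recommendation("Recalibrate the pressure sensor", 0.58),
    Recommendation("Flush the hydraulic line", 0.31),
]
recs[1].upvotes += 2  # technicians keep confirming the second option works

for r in rank(recs):
    print(f"{r.adjusted_score():.2f}  {r.guidance}")
```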