Insurance, like healthcare, is an industry where AI may have far-reaching impacts, but where trust, transparency, and auditability are absolute musts.
“There are a number of potential use cases for AI in insurance, such as customer acquisition, agent productivity, claims prevention, underwriting, customer service, cross-selling, policy adjustment, and improving risk and compliance,” says Matt Sanchez, founder and CTO at CognitiveScale. Sanchez points to a recent Accenture survey that found most insurance executives expect AI to revolutionize their industry in the next three years.
It's also an area where the stakes are considerable: just think of the key insurance categories – life, homeowner's, health, workers' compensation, and so forth. Explainable AI will matter a great deal here; Sanchez suggests asking these questions, each of which also applies in other sectors:
- Can the AI explain how it got to this insight or result?
- What data, models, and processing have been applied to get this result?
- Can my regulators access and understand how this AI works?
- Who accessed what and when?
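The first two questions – how an AI arrived at a result, and what data and models produced it – are easiest to answer with interpretable models. As a minimal, hypothetical sketch (the feature names and weights below are invented for illustration, not drawn from any real underwriting model), a linear risk score can report each feature's contribution to its own output:

```python
# Minimal sketch of a self-explaining linear risk score.
# Feature names and weights are invented for illustration only.

WEIGHTS = {"age": 0.03, "prior_claims": 0.40, "credit_score": -0.002}
BIAS = 0.5

def score_with_explanation(applicant):
    """Return a risk score plus each feature's contribution to it."""
    contributions = {
        name: WEIGHTS[name] * applicant[name] for name in WEIGHTS
    }
    total = BIAS + sum(contributions.values())
    return total, contributions

score, why = score_with_explanation(
    {"age": 40, "prior_claims": 2, "credit_score": 700}
)
print(f"risk score: {score:.2f}")
for name, part in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {part:+.2f}")
```

Because every contribution is explicit, a regulator or auditor can see exactly why one applicant scored higher than another – something a black-box model cannot offer without additional explanation tooling.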
4. Autonomous vehicles
Explainable AI should ultimately be about making that AI as valuable as possible. In some cases, the inherent value of AI operating optimally is about as basic as it gets.
“Understanding why your AI service made a certain decision or how it derived a certain insight is key for AI practitioners to better integrate AI services,” says Stephen Blum, CTO and co-founder of PubNub. “Take autonomous vehicles. How the AI system is built and how it interacts with the vehicle is high stakes. It could mean life or death.”
Indeed, the emerging arena of self-driving vehicles is one where AI will certainly play a role – and where explainable AI will be paramount.
Appleby, the Kinetica CEO, puts the importance of explainability in this context in terms virtually anyone can understand: “If a self-driving car finds itself in a position where an accident is inevitable, what measures should it take? Prioritize the protection of the driver and put pedestrians in grave danger? Avoid pedestrians while putting the passengers’ safety at risk?”
Needless to say, these aren’t questions with easy answers. But here’s a fairly straightforward conclusion: The black box model of AI does not work in this context. Explainability will be a must, whether you’re the passenger or the pedestrian, not to mention the automaker, the public safety official, and so forth.
“We may disagree about the appropriate vehicle response, but we should be aware in advance of the moral priorities it is programmed to follow,” Appleby says. “With established data governance within the company, automakers can trace the data set by tracking and explaining how the model went from decision point A to point Z, therefore making it easier for them to assess whether or not these outcomes map to the ethical stance they intend to take as a company. Passengers can likewise determine if they are comfortable traveling in a vehicle that is designed to make certain decisions.”
This may be a particularly grim (albeit realistic) scenario, but again, there’s an underlying principle that translates widely, including into scenarios that are not urgent matters of life and death. Explainable AI is about improvement and optimization, and that’s another way for IT leaders to think about it going forward.
“If the AI system makes a mistake, builders need to understand why it did that so they can improve and fix it,” Blum says. “If their AI service lives and operates in a black box, they have no insight into how to debug and improve it.”
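One practical step toward the insight Blum describes is recording every decision alongside its inputs and model version, so a mistaken output can be traced and reproduced later. A minimal sketch (the record fields here are illustrative, not a standard schema):

```python
# Minimal sketch: log each model decision with its inputs and
# model version so mistakes can be traced and debugged later.
import json
from datetime import datetime, timezone

def log_decision(inputs, output, model_version, log):
    """Append one auditable decision record (as JSON) to `log`."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    log.append(json.dumps(record))
    return record

decision_log = []
log_decision({"speed_kph": 52, "obstacle": "pedestrian"},
             "brake", "v1.3.0", decision_log)
```

With such a log, builders can replay the exact inputs that led to a bad decision instead of guessing at what happened inside the model.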