Every CIO is deeply involved in conversations about artificial intelligence. Now that the technology has reached the boardroom, CEOs are planning their AI-powered futures. In a recent PwC survey, an overwhelming majority of chief executives said they expect AI to have an impact on every function we asked about, from operations, customer service, and marketing to strategy, HR, and risk.
It falls to CIOs to bring those plans to life. Some are building their own AI capabilities, training machines to segment customers in more targeted ways, or using natural language processing to add empathy to customer interactions. Others are relying on their technology vendors to provide deeper AI capabilities, building on larger data sets to power smarter ERP or CRM systems. Most CIOs are considering both approaches, depending on how close a process or function sits to the company’s core business.
Regardless of the direction, there’s one capability for which many of the CIOs I speak with still don’t have a robust plan. As we outline in our 2018 AI Predictions report, opening AI’s black box will become a priority for all companies, not just technology giants.
Artificial intelligence can learn a great deal from data, but it also risks drawing the wrong conclusions. The only way to be confident in those conclusions is to make sure we can explain the AI’s reasoning. Indeed, standard setters and regulators look for that “explainability” in many instances.
The truth is that AI can make probabilistic determinations in non-obvious ways, and it still falls to humans to understand why. If AI-powered software turns down a mortgage application, the bank has to be able to explain why. If an AI application advises a human resources department not to pursue a certain candidate, that recommendation has to comply with anti-discrimination laws. And what happens if an AI trader makes a leveraged bet on the stock market?
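To make that concrete, here is a minimal sketch of one way a lender could generate human-readable “reason codes” for a decision. The model, training data, and feature names below are hypothetical assumptions for illustration; production credit systems are far more elaborate, but the principle of producing per-applicant explanations is the same:

```python
# A minimal sketch of per-applicant "reason codes," assuming a simple
# logistic regression credit model and hypothetical feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["debt_to_income", "credit_utilization", "years_employed"]

# Hypothetical historical applications: 1 = approved, 0 = declined.
X = np.array([[0.15, 0.20, 10.0],
              [0.60, 0.90,  1.0],
              [0.30, 0.40,  5.0],
              [0.70, 0.85,  2.0]])
y = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

# Contribution of each feature to this applicant's log-odds of approval,
# relative to an average applicant. Most negative = strongest denial reason.
applicant = np.array([0.65, 0.80, 1.5])
contributions = model.coef_[0] * (applicant - X.mean(axis=0))
for name, value in sorted(zip(features, contributions), key=lambda p: p[1]):
    print(f"{name}: {value:+.3f}")
```

The point is not this particular model; it’s that whatever model is used, the business must be able to rank and articulate the factors behind each individual decision.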
Leaders, employees, consumers, and stakeholders will likely be wary of relying on AI that acts inexplicably. Thus, pressure is growing to open up black boxes and make AI explainable, transparent, and provable. That’s a conundrum that the CIO shouldn’t try to tackle alone.
Where the CFO and audit experts can help
Fortunately, the expertise already exists — in your chief financial officer (CFO) and chief audit executive (CAE). Auditors spend their days evaluating the effectiveness of governance, risk, and control processes. Artificial intelligence and other emerging technologies like blockchain are a new frontier where you can collaborate.
We’ve identified six categories of AI risk that organizations need to think through: performance, security, control, ethical, societal, and economic.
The need for governance and control mechanisms as part of explainable AI is a big part of the first category: performance. Internal auditors can be the humans in the loop, working with innovators to implement risk frameworks. They’ll need to codify how data can be “de-biased” and how learning algorithms can be taught to steer clear of legal and ethical landmines. Bias can creep into virtually any kind of segmentation, so a risk framework is essential any time an AI is working with customer or employee data. Prosecutors and judges have run into this problem in sentencing, for example, when they assessed defendants’ likelihood of recidivism using an algorithm called COMPAS, which turned out to have encoded racial bias in its risk scores.
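As one illustration, here is a minimal sketch of the kind of screening test an auditor might codify in such a framework: comparing selection rates across groups. The data and column names are hypothetical, and the 80 percent threshold is a common screening heuristic, not a legal standard:

```python
# A minimal sketch of a bias screen an internal auditor might run, assuming
# a pandas DataFrame of model decisions with hypothetical "group" and
# "approved" columns.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()   # disparate impact ratio

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:   # the "80% rule" screening heuristic
    print("Warning: selection rates differ enough to warrant review.")
```

A failed screen doesn’t prove discrimination, but it tells the audit team exactly where a human needs to look before the model’s decisions stand.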
Security risks are self-evident. Cyber-intrusion of AI systems can have catastrophic consequences: data could be compromised, corrupting machine learning, and just as dangerous would be the remote takeover of autonomous vehicles or drones. And for the last category, economic risks, finance professionals can weigh the economic implications of AI, both its costs and its benefits, including the impact on the workforce.
CIOs have another good reason to work closely with the CFO and CAE: In many businesses, it’s these functions that will help drive and shape AI and automation strategy. Using machine learning to automatically match purchase orders and invoices, for example, or to test a full population of transactions (instead of a sample) requires functional specialists with domain knowledge to work alongside data scientists to develop, train, and monitor the models.
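To illustrate what full-population testing can look like, here is a minimal sketch that flags unusual transactions for human review. The features are hypothetical, and the technique shown (an isolation forest) is just one of several an audit team might choose:

```python
# A minimal sketch of testing a full transaction population instead of a
# sample, assuming hypothetical amount/hour features. Flagged outliers go
# to a human auditor; the model only prioritizes review, it doesn't judge.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical transactions: [amount, hour_posted]
normal = np.column_stack([rng.normal(500, 100, 1000),
                          rng.normal(14, 2, 1000)])
odd = np.array([[9_500, 3], [12_000, 2]])   # large, posted overnight
transactions = np.vstack([normal, odd])

model = IsolationForest(contamination=0.005, random_state=0).fit(transactions)
flags = model.predict(transactions)          # -1 marks an outlier

print(f"Flagged {np.sum(flags == -1)} of {len(transactions)} "
      "transactions for review")
```

The specific model matters less than the division of labor it implies: domain specialists decide which flags matter, and data scientists keep the model honest over time.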
In short, finance and audit leaders can be a CIO’s secret weapon in realizing AI’s potential. That’s why I believe smart CIOs will get to know their executive peers very well in the coming months and years.