The state of Artificial Intelligence (AI) ethics: 14 interesting statistics

Now's the time for IT leaders to evaluate artificial intelligence systems – some deployed rapidly during the pandemic – for any unethical outcomes or AI biases. Consider what recent data says about AI ethics.

In the wake of the COVID-19 pandemic, the deployment of artificial intelligence (AI) systems has increased rapidly, driven by greater demand for automation, advanced analytics, and remote work. In fact, machine learning and other forms of AI are being applied to address the increasing scale of the pandemic itself. Now, say experts, is a good time to step back and consider the ethics of these (and all) AI applications.

“These past few months have been especially challenging, and the deployment of technology in ways hitherto untested at an unrivaled pace has left the internet and technology watchers aghast,” Abhishek Gupta, founder of the Montreal AI Ethics Institute, said in the introduction to that organization’s inaugural State of AI Ethics report this June. “It has never been more important that we keep a sharp eye out on the development of this field and how it is shaping our society and interactions with each other.”

[ Do you understand the main types of AI? Read also: 5 artificial intelligence (AI) types, defined. ]

The Capgemini Research Institute examined many of the ethical issues emerging at the enterprise level in its recent report, AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust. “Now is a good time to talk about the ethics of these AI systems, because when those systems were initially implemented, they may not have been done with ethical considerations in mind,” Dan Simion, vice president of AI & Analytics for Capgemini North America, says of some of the most recent AI-enabled implementations. “Rather, the priority was to get them up and running from a pure productivity perspective. IT leaders need to go back and evaluate their AI systems that were deployed into production and evaluate if they are showing any unethical outcomes or AI biases.”

Looking forward, Simion says, CIOs can play a more pivotal role. “In the future, it’s up to IT leaders to ensure that ethics are a part of the conversation from the beginning when deploying these AI systems – even when deployments are done quickly.”

AI ethics by the numbers – and implications for IT

We talked to Simion about a few of the most telling stats from his organization’s AI ethics research and what they mean for IT leaders:

Nine out of 10: That's the overwhelming majority of organizations that said they were aware of at least one instance in which an AI system resulted in ethical issues for their business.

Not surprising, says Simion. “At the end of the day, the AI systems are relatively new. However, when companies see these unethical outcomes, they need to make sure their systems are more rigorously tested against those outcomes, including discriminatory bias.”

65 percent: Nearly two-thirds of executives said they were aware of the issue of discriminatory bias in AI systems, up from 35 percent last year. Leaders who are aware of these issues need to work actively to fix them and prevent unethical outcomes from reaching their customers in the future. “The awareness gives the organization visibility that there is a problem,” Simion says, “but it’s up to the organization to correct the problem.”

At the same time, more than three-quarters (78 percent) said they were aware of explainability issues, up from 32 percent last year. And 69 percent said they understood the issues of transparency in AI engagements, up from 36 percent in 2019.

Two-thirds: That’s the proportion of customers who expect organizations’ AI models to be fair and free of prejudice and bias against them or any other person or group. An even higher share (71 percent) expect AI systems to clearly explain their results, and 67 percent said they expect organizations to take ownership of their AI algorithms when those algorithms go wrong.

40 percentage points: That’s the single-year jump in the share of organizations that have defined an ethical charter to provide guidelines on AI development. Almost half (45 percent) have one today, up from just 5 percent in 2019.

Simion attributes the sizable leap to a combination of factors. “First, the situation created by the pandemic resulted in a greater need for AI and more organizations utilizing these capabilities,” he says. “Second, now that consumers are interacting more with AI to avoid person-to-person contact, they are demanding that these interactions be ethical in nature.”

59 percent: That’s the share of organizations that informed users this year about the ways in which AI decisions might affect them – down 14 percentage points from the nearly three-quarters (73 percent) that said they did so last year.

The reason? Likely, other priorities have superseded these efforts given the “significant changes and transformations that needed to take place to keep businesses operational when the pandemic began this year,” says Simion, who doesn’t see a cause for concern yet. “These initiatives to build AI systems that will inform users have just been pushed back as a result of the current circumstances.”

Six out of 10: That’s the proportion of organizations that have attracted legal scrutiny because of decisions reached by their AI systems. Another 22 percent say they have faced customer backlash as a result of AI-enabled actions or decisions.

As regulators begin to analyze the ethics of AI systems more closely, Simion expects legal issues to increase. Conversely, he predicts customer backlash will decline as companies continue to improve their systems and become more transparent about AI outcomes. “As companies work to create systems that remove and avoid unethical AI bias,” he explains, “they will create more positive customer experiences.”

45 percent: That’s the share of consumers who, after a negative AI experience, would tell family and friends and urge them not to engage with the organization. Nearly four out of 10 (39 percent) would switch from the AI channel to a higher-cost human channel, and just over a quarter (27 percent) would stop dealing with the organization or trust it less.

[ How can automation free up staff time for innovation? Get the free eBook: Managing IT with Automation. ] 

Stephanie Overby is an award-winning reporter and editor with more than twenty years of professional journalism experience. For the last decade, her work has focused on the intersection of business and technology. She lives in Boston, Mass.