Do you have some anxiety about Artificial Intelligence (AI) bias or related issues? You’re not alone. Nearly all business leaders surveyed for Deloitte’s third State of AI in the Enterprise report expressed concerns around the ethical risks of their AI initiatives.
There is certainly some cause for uneasiness. Nine out of ten respondents to a late 2020 Capgemini Research Institute survey were aware of at least one instance where an AI system had resulted in ethical issues for their businesses. Nearly two-thirds had experienced discriminatory bias in AI systems, six out of ten said their organizations had attracted legal scrutiny as a result of AI applications, and 22 percent reported suffering customer backlash over decisions reached by AI systems.
As Capgemini leaders pointed out in their recent blog post: “Enterprises exploring the potential of AI need to ensure they apply AI the right way and for the right purposes. They need to master Ethical AI.”
[ Check out our primer on 10 key artificial intelligence terms for IT and business leaders: Cheat sheet: AI glossary. ]
7 Artificial Intelligence ethics questions leaders often hear
While organizations aggressively pursue increased AI capabilities, they will look to IT and data science leaders to explain the risks and best practices around ethical and trusted AI. “In a future where AI is ubiquitous, adopters should be creative, become smarter AI consumers, and establish themselves as trustworthy guardians of customer data in order to remain relevant and stay ahead of the competition,” says Paul Silverglate, vice chair and U.S. technology sector leader for Deloitte.
Here, AI experts address some common questions about ethical AI. You may hear these from colleagues, customers, and others. Consider them in the context of your organization:
1. Isn't AI itself inherently ethical and unbiased?
It may seem that technology is neutral, but that is not exactly the case. AI is only as equitable as the humans that create it and the data that feeds it. “Machine learning that supports automation and AI technologies is not created by neutral parties, but instead by humans with bias,” explains Siobhan Hanna, managing director of AI data solutions for digital customer experience services provider Telus International.
“We might never be able to eliminate bias, but we can understand bias and limit the impact it has on AI-enabled technologies. This will be important as the cutting-edge, AI-supported technology of today can and will become outdated rapidly.”
2. What is ethical AI?
While AI or algorithmic bias is one concern that the ethical use of AI aims to mitigate, it is not the only one. Ethical AI considers the full impact of AI usage on all stakeholders, from customers and suppliers to employees and society as a whole. Ethical AI seeks to prevent or root out potentially “bad, biased, and unethical” uses of AI. “Artificial intelligence has limitless potential to positively impact our lives, and while companies might have different approaches, the process of building AI solutions should always be people-centered,” says Telus International’s Hanna.
“Responsible AI considers the technology’s impact not only on users but on the broader world, ensuring that its usage is fair and responsible,” Hanna explains. “This includes employing diverse AI teams to mitigate biases, ensuring appropriate representation of all users, and publicly stating privacy and security measures around data usage and personal information collection and storage.”
3. How big a concern is ethical AI?
It’s top of mind from boardrooms (where C-suite leaders are becoming aware of risks like biased AI) to break rooms (where employees worry about the impact of intelligent automation on jobs).
“As organizations become more invested in AI, it is imperative that they have a common framework, principles, and practices for the board, C-suite, enterprise and third-party ecosystem to proactively manage AI risks and build trust with both their business and customers,” says Irfan Saif, principal and AI co-leader, Deloitte & Touche.
4. How does diversity (or lack thereof) impact ethical AI?
A culturally diverse team is a powerful way to detect conscious and unconscious biases and prevent them from being baked into AI, Hanna says.
“By tapping into the strength of this diversity, your brand might be better positioned to think and behave differently about trust, safety, and ethics and then transfer that knowledge and experience into AI solutions,” says Hanna, who recommends a “human-in-the-loop” approach. This ensures that algorithms programmed by humans with inherent blind spots and biases are reviewed and corrected in the early phases of development or deployment.
5. What else are leading companies doing to address the ethical risks of AI?
At a tactical level, avoiding data-induced AI bias is critical. CompTIA advises ensuring balanced label representation in training data, as well as making the purpose and goals of AI models clear enough that proper test datasets can be created to check the models for bias.
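The balanced-label check CompTIA describes can be done with a quick per-group tally before training. A minimal sketch with hypothetical data (the field names and records here are illustrative, not from CompTIA or any specific tool):

```python
from collections import Counter

def label_balance(records, group_key, label_key):
    """Tally labels within each demographic group to spot imbalance."""
    counts = {}
    for r in records:
        counts.setdefault(r[group_key], Counter())[r[label_key]] += 1
    return counts

# Hypothetical training records: group membership plus outcome label
data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "A", "label": 1}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 1},
]

for group, labels in label_balance(data, "group", "label").items():
    total = sum(labels.values())
    shares = {k: round(v / total, 2) for k, v in labels.items()}
    print(group, shares)
```

If one group's positive-label share is far from another's, the training data (or the process that generated it) deserves scrutiny before a model is built on it.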
Open source toolkits such as Aequitas can help measure bias in uploaded datasets. Themis-ml, an open source machine learning library, aims to reduce data bias using bias-mitigation algorithms. There are also tools for identifying flawed algorithms, such as IBM’s AI Fairness 360, an extensible open source toolkit that combines a number of bias-mitigating algorithms to help teams detect problems in machine learning models.
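One metric these toolkits commonly report is disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. A minimal sketch of that calculation with hypothetical decisions (this is the underlying arithmetic, not the actual API of Aequitas or AI Fairness 360):

```python
def favorable_rate(outcomes):
    """Fraction of favorable (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of favorable-outcome rates between two groups.

    A common rule of thumb (the "four-fifths rule") flags
    ratios below 0.8 as potential evidence of bias.
    """
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical model decisions (1 = approved, 0 = denied) for two groups
group_a = [1, 1, 1, 0]   # 75% approval rate
group_b = [1, 0, 0, 0]   # 25% approval rate

di = disparate_impact(group_b, group_a)
print(f"disparate impact: {di:.2f}")  # 0.25 / 0.75, well below the 0.8 threshold
```

A ratio near 1.0 means the two groups receive favorable outcomes at similar rates; the toolkits above compute this and many related metrics across slices of real datasets.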
Some organizations are creating ethical guidelines for AI development as well as clear processes for informing users about how their data is used in AI applications. Nearly half (45 percent) of organizations had defined an ethical charter to provide guidelines on AI development in 2021, according to the Capgemini Research Institute survey – a dramatic increase from just 5 percent in 2019. However, only 59 percent of organizations informed users last year about the ways AI decisions might affect them, down from 73 percent the year prior.
6. Will AI steal my job?
Myth-busting can also be a meaningful aspect of ethical AI. “There are a number of misunderstandings that business leaders should be aware of,” says Hanna of Telus International. “For instance, AI will not replace the human workforce, but support it. While the technology has proven to be helpful across a number of industries, including outperforming humans in diagnosing cancer or reviewing legal documents, a cataclysmic impact on human jobs is not in our future.”
Business leaders should focus their efforts not on automating as many employees as possible out of their roles, but rather on what new work those employees may be freed up to do. What business opportunities does that automation open up?
For more advice on managing job loss fears, read Adobe CIO Cynthia Stoddard's article: "How IT automation became a team eye-opener." As Stoddard puts it, "When you mention the word 'automation,' everyone's visceral reaction is to wonder what will happen to their job. But once our team saw the outcomes – that they could participate in the future of thought, experiment with new technologies, and focus on honing new skills – IT automation became a real eye-opener."
7. Is ethical AI IT's job?
Ethical AI is important to everyone, but IT leaders can play a powerful role. “AI greatly impacts our lives and is being integrated into every single industry. Trust is everything in the digital era, and enterprise IT leaders should be educated on what constitutes trusted and ethical AI to lead their organizations to establish the correct guidelines and frameworks into their programs,” Hanna says. “While industry standards in this area are still maturing, there is widespread recognition that product architecture and development should be based on appropriate ethics.”