AI bias: 9 questions leaders should ask

Artificial intelligence bias can create problems ranging from bad business decisions to injustice. Use these questions to root out potential bias in your AI systems.

Management questions to ask about AI bias

If you’re actually worried about those robot overlord scenarios, then it’s probably a good idea to have some controls in place to prevent them from coming true. More seriously, ongoing monitoring, auditing, and optimization of AI applications are absolutely necessary to guard against systemic bias. Like security and other IT concerns, this work is never truly finished.

7. What proportion of resources is appropriate for an organization to devote to assessing potential bias?

“A thoughtful organization will scale the resources dedicated to assessing bias based on the potential impact of any bias and the sensitivity to potential bias of the team charged with developing the AI system,” says Northman, the Tucker Ellis attorney.

This is itself an ongoing project of analysis and optimization.

“As a rule, some of the most important assessment will take place following deployment,” Northman says. “It is critical that an organization study and evaluate the results the AI system produces.”

8. Have you thought deeply about what metrics you use to evaluate your work?

AI needs measurement – like any other worthwhile IT endeavor. Glaser from Periscope Data notes that short-term, bottom-line metrics like revenue or traffic are more fertile terrain for AI bias.

“AI systems are less likely to go off the rails if the team is evaluating them using long-term holistic metrics, including pairing metrics so that they balance each other out,” Glaser says.
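
To make the pairing idea concrete, here is a minimal Python sketch of a paired-metric gate: a change is accepted only if a short-term metric improves and its long-term counterpart doesn't degrade beyond a tolerance. The metric names and thresholds are illustrative assumptions, not anything Glaser prescribes.

```python
# A minimal sketch (not from the article) of "pairing metrics so they
# balance each other out": gate a change on a short-term metric AND a
# counterbalancing long-term metric, so optimizing one can't quietly
# degrade the other. Metric names and thresholds are hypothetical.

def passes_paired_gate(baseline, candidate, max_drop=0.01):
    """Accept the candidate only if its short-term gain does not come
    at the cost of the paired long-term metric."""
    short_term_gain = candidate["click_rate"] - baseline["click_rate"]
    long_term_drop = baseline["retention_rate"] - candidate["retention_rate"]
    return short_term_gain > 0 and long_term_drop <= max_drop

baseline = {"click_rate": 0.120, "retention_rate": 0.80}
candidate = {"click_rate": 0.135, "retention_rate": 0.74}  # clicks up, retention down

print(passes_paired_gate(baseline, candidate))  # False: retention fell too far
```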

9. How can we test for bias in training data?

McFarland notes that AI, like so many other IT systems, requires ongoing testing to succeed, though he acknowledges that teams may encounter challenges here, including cost and the accuracy of population assessments. Not testing at all, however, leaves the door wide open for bias.

Frequency bias is a common issue with AI; it’s also one you can test for in your training data and on an ongoing basis.

“Compare counts of different population types in your training data and compare that to the actual distribution of your target population,” McFarland says. “Any skews in the counts can indicate a bias for or against a certain population type.”
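
As a rough illustration of that count comparison, the Python sketch below flags population types whose share of the training data deviates from the target population by more than a chosen tolerance. The group labels, shares, and 5 percent threshold are hypothetical, not from McFarland.

```python
# A minimal sketch of the frequency-bias check described above: compare
# category counts in the training data against the (assumed) distribution
# of the target population. All names and thresholds are illustrative.
from collections import Counter

def frequency_skew(training_labels, population_dist, threshold=0.05):
    """Flag population types whose share of the training data deviates
    from the target population by more than `threshold`."""
    counts = Counter(training_labels)
    total = sum(counts.values())
    skews = {}
    for group, expected_share in population_dist.items():
        observed_share = counts.get(group, 0) / total
        if abs(observed_share - expected_share) > threshold:
            skews[group] = (observed_share, expected_share)
    return skews

# Hypothetical usage: training-data labels vs. census-style population shares.
training_labels = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
population_dist = {"A": 0.5, "B": 0.3, "C": 0.2}
print(frequency_skew(training_labels, population_dist))
# {'A': (0.7, 0.5), 'C': (0.05, 0.2)} -> A over-represented, C under-represented
```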

Coming full circle, human biases are another common issue. You can test for those, too.

“Have several experts summarize the same document and then compare their results,” McFarland says. “If you have a good summarization framework, these summaries will be mostly interchangeable. Looking for variances across these summaries can help identify biases in the experts.”
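
One simple (and admittedly crude) way to operationalize that comparison: score the pairwise similarity of the experts' summaries and flag the expert who diverges most from the group. The Jaccard word-overlap measure below is an assumption for illustration; McFarland doesn't prescribe a specific metric.

```python
# A hedged sketch of the expert-comparison idea: compute pairwise
# similarity between summaries of the same document and surface the
# expert whose summaries diverge most from the rest. Jaccard overlap
# over word sets is one simple (assumed) choice of comparison.
from itertools import combinations

def jaccard(a, b):
    """Word-set overlap between two summaries, in [0, 1]."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    return len(set_a & set_b) / len(set_a | set_b)

def divergence_by_expert(summaries):
    """Average each expert's similarity to every other expert; a low
    score suggests their summaries may reflect an individual bias."""
    scores = {name: [] for name in summaries}
    for (n1, s1), (n2, s2) in combinations(summaries.items(), 2):
        sim = jaccard(s1, s2)
        scores[n1].append(sim)
        scores[n2].append(sim)
    return {name: sum(vals) / len(vals) for name, vals in scores.items()}

# Hypothetical expert summaries of the same document.
summaries = {
    "expert_1": "The model denies loans to applicants with thin credit files",
    "expert_2": "The model rejects loan applicants who have thin credit files",
    "expert_3": "Customers generally seem satisfied with the loan process",
}
print(divergence_by_expert(summaries))  # expert_3 scores lowest
```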
