AI bias: 9 questions leaders should ask

Artificial intelligence bias can create problems ranging from bad business decisions to injustice. Use these questions to guard against potential biases in your AI systems.


January 29, 2019
Management questions to ask about AI bias

If you’re genuinely worried about those robot overlord scenarios, it’s probably a good idea to have some controls in place to prevent them from coming true. More seriously, ongoing monitoring, auditing, and optimization of AI applications is absolutely necessary to guard against systemic bias. Like security and other IT concerns, this is an ongoing effort, not a one-time project.

7. What proportion of resources is appropriate for an organization to devote to assessing potential bias?

“A thoughtful organization will scale the resources dedicated to assessing bias based on the potential impact of any bias and the sensitivity to potential bias of the team charged with developing the AI system,” says Northman, the Tucker Ellis attorney.

This is itself an ongoing project of analysis and optimization.

“As a rule, some of the most important assessment will take place following deployment,” Northman says. “It is critical that an organization study and evaluate the results the AI system produces.”

8. Have you thought deeply about what metrics you use to evaluate your work?

AI needs measurement – like any other worthwhile IT endeavor. Glaser from Periscope Data notes that short-term, bottom-line metrics like revenue or traffic are more fertile terrain for AI bias.

“AI systems are less likely to go off the rails if the team is evaluating them using long-term holistic metrics, including pairing metrics so that they balance each other out,” Glaser says.
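Glaser's pairing idea can be sketched in a few lines: track a short-term metric alongside a long-term counter-metric, and flag any run where one improves at the other's expense. The metric names and the tolerance threshold below are illustrative assumptions, not anything the article specifies.

```python
# Hypothetical sketch of "pairing metrics so they balance each other out":
# a short-term metric (revenue lift) is only counted as a win if its
# long-term counterpart (customer retention) hasn't degraded too far.
# The 2% tolerance is an arbitrary illustrative choice.

def paired_metric_check(revenue_lift: float, retention_delta: float,
                        max_tradeoff: float = 0.02) -> str:
    """Flag evaluations where a short-term gain hides a long-term cost."""
    if revenue_lift > 0 and retention_delta < -max_tradeoff:
        return "review"   # revenue up, but retention dropped beyond tolerance
    return "ok"

print(paired_metric_check(0.05, -0.01))  # ok
print(paired_metric_check(0.05, -0.04))  # review
```

The point is not the arithmetic but the pairing: a model optimized on one bottom-line number alone has more room to drift into biased behavior unnoticed.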

9. How can we test for bias in training data?

McFarland notes that AI success, as with so many other IT systems, requires ongoing testing, though he acknowledges that teams may encounter challenges here, including costs or the difficulty of accurately assessing the target population. But not testing at all leaves the door wide open to bias.

Frequency bias is a common issue with AI; it’s also one you can test for in your training data and on an ongoing basis.

“Compare counts of different population types in your training data and compare that to the actual distribution of your target population,” McFarland says. “Any skews in the counts can indicate a bias for or against a certain population type.”
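McFarland's count comparison can be automated with a few lines of Python. The sketch below assumes you have labeled training examples and known population shares; the group labels and the 10% tolerance are illustrative assumptions.

```python
from collections import Counter

# Sketch of McFarland's frequency-bias check: compare the mix of
# population types in the training data against the known mix of the
# target population, and flag groups that are over- or under-represented.

def skew_report(training_labels, target_shares, tolerance=0.10):
    counts = Counter(training_labels)
    total = sum(counts.values())
    flags = {}
    for group, expected in target_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            flags[group] = round(observed - expected, 3)
    return flags  # empty dict means no skew beyond the tolerance

train = ["A"] * 70 + ["B"] * 20 + ["C"] * 10
target = {"A": 0.50, "B": 0.30, "C": 0.20}
print(skew_report(train, target))  # {'A': 0.2} — group A over-represented
```

Run on a schedule, a check like this supports the ongoing testing McFarland recommends, not just a one-time audit before deployment.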

Coming full circle, human biases are another common issue. You can test for those, too.

“Have several experts summarize the same document and then compare their results,” McFarland says. “If you have a good summarization framework, these summaries will be mostly interchangeable. Looking for variances across these summaries can help identify biases in the experts.”
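McFarland's expert-summary comparison can also be scripted. One simple approach, my choice rather than anything the article prescribes, is to score pairwise word overlap (Jaccard similarity) between summaries of the same document and flag pairs that diverge sharply:

```python
from itertools import combinations

# Sketch: if summaries of the same document should be "mostly
# interchangeable," low pairwise word overlap may point to bias in
# one of the experts. Jaccard similarity and the 0.5 threshold are
# illustrative assumptions.

def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def divergent_pairs(summaries, threshold=0.5):
    return [(i, j)
            for (i, a), (j, b) in combinations(enumerate(summaries), 2)
            if jaccard(a, b) < threshold]

summaries = [
    "loan denied due to low credit score",
    "loan denied due to low credit score history",
    "applicant seemed unreliable so rejected",
]
print(divergent_pairs(summaries))  # [(0, 2), (1, 2)] — third summary diverges
```

Flagged pairs are a starting point for human review, not proof of bias; the divergent expert may simply have noticed something the others missed.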
