Bad, biased, and unethical uses of AI

CIOs should be aware of these 4 examples of unethical AI - and understand their role in ensuring that enterprise AI remains neutral

Adoption of AI technology is accelerating rapidly. Gartner forecasts that by 2020, AI will be a top-five investment priority for more than 30 percent of CIOs. A McKinsey study estimates that tech companies are spending between $20 and $30 billion on AI, mostly in research and development.

While the social utility of AI technology is compelling, there are legitimate concerns, as raised by The Guardian’s Inequality Project: “When the data we feed the machines reflects the history of our own unequal society, we are in effect asking the program to learn our own biases.”

Unfortunately, examples of bad, biased, or unethical uses of AI are commonplace. Here are just four examples that every CIO should be aware of, along with advice on how enterprises can remain neutral.

1. Mortgage lending

The mode of lending discrimination has shifted from human bias to algorithmic bias. A study co-authored by Adair Morse, a finance professor at the Haas School of Business, concluded that “even if the people writing the algorithms intend to create a fair system, their programming is having a disparate impact on minority borrowers — in other words, discriminating under the law.”

[ Are you asking the right questions when it comes to systemic bias? Read also AI bias: 9 questions leaders should ask. ]

You might assume that redlining, the systematic segregation of non-white borrowers into less-favorable neighborhoods by banks and real estate agents, is a thing of the past — but you would be wrong. Surprisingly, the automation of the mortgage industry has only made it easier to hide redlining behind a user interface. In his recent book “Data and Goliath,” computer security expert Bruce Schneier recounts how in 2000, Wells Fargo created a website to promote mortgages using a “community calculator” that helped buyers find the right neighborhood. The calculator collected users’ current ZIP code, assumed their race according to the demographics of their current neighborhood, and recommended only neighborhoods with similar demographics. And earlier this year, HUD brought suit against Facebook for racial biases in housing and mortgage advertisements.

2. Human resources

By far the most infamous example of bias in recruiting and hiring came to public attention when Reuters reported that Amazon.com’s experimental recruiting engine was biased against women.

According to Reuters, Amazon assembled a team in 2014 that used more than 500 algorithms to automate the resume-review process for engineers and coders. The team trained the system on the resumes of members of Amazon’s software teams – which were overwhelmingly male. As a result, the system learned to penalize anyone who attended a women’s college or who listed women’s organizations on their resume.
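
The mechanism is worth understanding, because it generalizes well beyond Amazon: a model that never sees a protected attribute can still learn it through correlated proxy features in historical data. The sketch below is a hypothetical illustration using entirely synthetic data and scikit-learn (it does not reflect Amazon’s actual system); it also shows the kind of disparate-impact check that can surface the problem.

```python
# Hypothetical sketch with synthetic data: a screener trained on biased
# historical decisions learns a proxy for group membership, even though
# the protected attribute itself is never used as a feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, size=n)  # stand-in for a protected attribute
skill = rng.normal(size=n)          # the thing we actually want to measure
# A proxy feature correlated with group membership (think: a keyword that
# appears mostly on one group's resumes), unrelated to skill.
proxy = (group == 1).astype(float) + rng.normal(scale=0.3, size=n)

# Historical labels reflect biased past decisions: skill matters, but
# group-1 candidates were systematically favored.
past_hired = skill + 1.0 * (group == 1) + rng.normal(scale=0.5, size=n) > 0.5

# The model only ever sees skill and the proxy feature, never `group`.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, past_hired)
pred = model.predict(X)

# Disparate-impact check: compare selection rates between groups.
rate0, rate1 = pred[group == 0].mean(), pred[group == 1].mean()
print(f"selection rate, group 0: {rate0:.2f}")
print(f"selection rate, group 1: {rate1:.2f}")
print(f"impact ratio: {min(rate0, rate1) / max(rate0, rate1):.2f}")  # below ~0.8 is the usual red flag
```

The point of such an audit is not the specific threshold but the habit: selection rates by group are measured explicitly, rather than assumed to be fair simply because the protected attribute was left out of the features.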

More and more companies are adopting algorithmic decision-making systems at every level of the HR process. As of 2016, 72 percent of job candidates’ resumes are screened not by people, but entirely by computers. That means job candidates and employees will be dealing with people less often – and stories like Amazon’s could become more common. 

However, the good news is that some companies are making efforts to eliminate potential bias. ABBYY founder David Yang co-founded Yva.ai, an analytics platform specifically designed to avoid algorithmic bias by excluding any indicator that could introduce it, such as gender, age, or race, even when such indicators are secondary (such as involvement in women’s activities or sports, or names and graduation dates) or tertiary (such as attendance at elite colleges, which has increasingly been called out as a proxy for bias against minorities).

In another example, LinkedIn, now owned by Microsoft, has deployed systems that collect and use the gender information in LinkedIn profiles rather than ignoring it. LinkedIn then uses this information to detect and correct for potential bias.

3. Search

Even basic Internet searches can be tainted with bias. For example, UCLA professor Safiya Umoja Noble was inspired to write her book “Algorithms of Oppression” after googling “black women” in a search for interesting sites to share with her nieces, only to find pages filled with pornography. Meanwhile, searches for “CEO” have historically shown image after image of white men.  (Fortunately, our own more recent experiences on Google suggest that the CEO problem is being addressed.)

Other features of Google search, such as AdWords, have also been guilty of bias. Researchers from Carnegie Mellon University and the International Computer Science Institute discovered that male job seekers were much more likely to be shown advertisements for high-paying executive positions than were women. Google Translate has also been called out for sexism in translating some languages, assuming, for example, that nurses are women and doctors are men.

4. Education

In what may well be the earliest reported instance of a tainted system, a 1979 program created by an admissions dean at St. George’s Hospital Medical School in London ended up systematically penalizing minority and female applicants. By 1986, staff members at the school had become concerned about potential discrimination and eventually discovered that at least 60 minority and female applicants were being unfairly excluded each year.

You might wonder why it took so long to raise the alarm, considering that according to reports, simply having a non-European name could automatically take 15 points off an applicant’s score. The prestigious British Medical Journal bluntly called this bias “a blot on the profession.” Ultimately, the school was mildly penalized, and it did offer reparations, including admitting some of those applicants who were excluded.

The CIO’s role 

Leading tech companies are making efforts to address the ethical use of data. Microsoft, for example, has developed a set of six ethical principles that span fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Meanwhile, Facebook recently granted $7.5 million to the Technical University of Munich to establish an Institute for Ethics in AI. Other tech companies have subscribed to the Partnership on AI consortium and its principles for “bringing together diverse global voices to realize the promise of artificial intelligence.”

If AI applications are to be bias-free, companies must support a holistic approach to AI technology. AI is only as good as the data behind it, so this data must be fair and representative of all people and cultures.
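
One practical way to act on that principle is to put a representativeness check in front of every training run. The sketch below is purely illustrative: the column name, reference shares, and the choice of a chi-square goodness-of-fit test (via SciPy) are assumptions, and real reference distributions would come from census or market data relevant to the use case.

```python
# Hypothetical sketch: flag a training set whose demographic mix diverges
# from a reference population before any model is trained on it.
# The column name and reference shares below are placeholders.
import pandas as pd
from scipy.stats import chisquare

def representativeness_check(df: pd.DataFrame, column: str,
                             reference: dict, alpha: float = 0.01):
    """Return (ok, p_value); ok is False if the observed group mix
    diverges significantly from the reference shares."""
    groups = list(reference)
    observed = df[column].value_counts()
    obs = [observed.get(g, 0) for g in groups]
    exp = [reference[g] * len(df) for g in groups]
    _, p_value = chisquare(f_obs=obs, f_exp=exp)
    return p_value >= alpha, p_value

# Made-up example: the training data skews heavily toward one group.
train = pd.DataFrame({"region": ["urban"] * 900 + ["rural"] * 100})
ok, p = representativeness_check(train, "region", {"urban": 0.55, "rural": 0.45})
print(f"representative: {ok} (p = {p:.2g})")
```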

Furthermore, the technology must be developed in accordance with international laws. At this year’s G20 summit, finance ministers agreed for the first time on the G20’s own principles for responsible AI use. These include a human-centric approach, which calls on countries to use AI in a way that respects human rights and shares the benefits it offers.

At the most basic level, CIOs need to question whether the AI applications they are building are moral, safe, and right. Questions may include:

  1. Is the data behind your AI technology good, or does it have algorithmic bias?
  2. Are you rigorously reviewing AI algorithms to ensure they’re properly tuned and trained to produce expected results against pre-defined test sets? (One sketch of what such a check can look like appears after this list.)
  3. Are you adhering to transparency principles (such as those in GDPR) in how AI technology affects the organization internally, and customers and partner stakeholders externally?
  4. Have you set up a dedicated AI governance and advisory committee that includes cross-functional leaders and external advisers that will establish and oversee governance of AI-enabled solutions?
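
To make question 2 concrete, here is one hedged sketch of an automated release gate that could run in CI: it asserts both a minimum accuracy and a minimum selection-rate ratio between groups on a frozen, pre-defined test set. The thresholds, the toy model, and the synthetic data are illustrative assumptions, not a standard.

```python
# Hypothetical sketch of a pre-release check: the model must meet accuracy
# and fairness thresholds on a frozen test set before it ships.
# Thresholds, model, and data below are illustrative placeholders.
import numpy as np

ACCURACY_FLOOR = 0.80       # minimum acceptable accuracy (illustrative)
IMPACT_RATIO_FLOOR = 0.80   # "four-fifths" rule of thumb (illustrative)

def evaluate(model, X_test, y_test, group):
    pred = model.predict(X_test)
    rates = {g: pred[group == g].mean() for g in np.unique(group)}
    return {
        "accuracy": float((pred == y_test).mean()),
        "impact_ratio": min(rates.values()) / max(rates.values()),
    }

def release_gate(metrics):
    assert metrics["accuracy"] >= ACCURACY_FLOOR, f"accuracy too low: {metrics['accuracy']:.2f}"
    assert metrics["impact_ratio"] >= IMPACT_RATIO_FLOOR, f"disparate impact: {metrics['impact_ratio']:.2f}"

if __name__ == "__main__":
    class ThresholdModel:
        """Toy stand-in for a real model: predicts 1 when the feature exceeds 0."""
        def predict(self, X):
            return (X[:, 0] > 0).astype(int)

    rng = np.random.default_rng(1)
    X_test = rng.normal(size=(2000, 1))
    y_test = (X_test[:, 0] > 0).astype(int)
    group = rng.integers(0, 2, size=2000)
    release_gate(evaluate(ThresholdModel(), X_test, y_test, group))
    print("release gate passed")
```

Checks like this belong alongside ordinary regression tests, so that a retrained model cannot ship if its behavior drifts on the pre-defined evaluation set.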

Ultimately, the ethical use of AI should be considered a legal and moral obligation as well as a business imperative. Don't become another example of bad and biased AI. Learn from these unethical use cases to ensure your company's AI efforts remain neutral.

[How do AI and machine learning systems make decisions? Read What is explainable AI? ]

Anthony Macciola is the Chief Innovation Officer at ABBYY, a Digital Intelligence company. He holds more than 45 patents for technologies in mobility, text analytics, image processing, and process automation, and is leading AI initiatives at ABBYY.