AI's 5 biggest risks: Early implementers speak

Leaders on the front lines implementing AI don't worry about robot overlords: They worry about real-world risks ranging from financial to ethical

For years, people have held abstract fears about AI – from scores of lost jobs to out-of-control robots. But as AI has matured and more companies experiment with the technology, those fears have evolved too, as the real risks become more apparent.

A new report from Harvard Business Review Analytic Services, "An Executive's Guide to Real-World AI," offers insights and first-hand accounts from top chief information and digital officers who are leading the charge in their organizations, ranging from Raytheon to Capital One. According to the report, the top concerns and risks associated with AI break down into five categories:

  • Strategic/financial: Will a new AI-based product or business model pay off? Should we turn a multimillion-dollar decision over to an AI system?
  • Reputational: How much automation do we want in our customer processes? What’s the impact if things go wrong?
  • Legal/regulatory: Does our AI comply, and can we prove how decisions and actions happened?
  • Ethical: Have we defined our standards, and are we building our AI in a way that will align with them? How are we managing bias?
  • Societal: When do we reach a tipping point in automation and the resultant impact on jobs? What could or should we do about that?

These fears are part of the reason that many early AI and automation use cases are limited to the task level, with final decisions and actions performed by humans, notes the report. But leaders in AI are addressing the concerns head-on in their work.
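To make that "task-level" pattern concrete, here is a minimal sketch of a human-in-the-loop flow, where the model only proposes an action and a person records the final decision. The function names, fields, and threshold are illustrative assumptions, not drawn from the report.

```python
# Illustrative human-in-the-loop pattern: the model suggests, a human decides.
# All names, fields, and thresholds below are hypothetical.

def model_recommendation(application: dict) -> tuple[str, float]:
    """Hypothetical scoring step: return a suggested action and a confidence."""
    score = 0.9 if application.get("income", 0) > 50_000 else 0.4
    return ("approve" if score > 0.5 else "decline", score)

def human_review(application: dict, suggestion: str, confidence: float) -> str:
    """The final decision stays with a person; the AI output is only one input."""
    print(f"Model suggests '{suggestion}' (confidence {confidence:.0%}) for {application}")
    return input("Final decision (approve/decline): ").strip().lower()

if __name__ == "__main__":
    app = {"income": 62_000, "requested_amount": 10_000}
    suggestion, confidence = model_recommendation(app)
    final = human_review(app, suggestion, confidence)
    print(f"Recorded decision: {final}")
```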


“People get nervous when they don’t understand what the machine is doing,” says Bryson Koehler, CTO of Equifax, in the report. “That can be anything from your Nest thermostat making a decision around when to turn on or off, or it could be more complex, like your car in auto-drive mode. People like to understand what’s happening.”

As a financial services company, Equifax needed to be able to show how decisions were being made to address concerns around AI. According to the report, it developed a patented approach to “help provide the reasons, the factors, the weightings, and the insight that go into the output of a credit decision made.” As Koehler explains in the report, “It allows the end user, whether that’s a data scientist or a loan officer, to actually see how those weights were applied and impacted the decision their institution has made.”
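The report doesn't detail how Equifax's patented approach works, but the general idea of surfacing factors and weightings can be illustrated with a simple, generic explainability sketch: ranking how much each factor moved a linear score relative to a baseline, then presenting the ranked list as reason codes. Everything below – feature names, weights, baselines – is invented for the example and is not Equifax's method.

```python
# Generic illustration of surfacing "reasons, factors, and weightings" from a
# scoring model. NOT Equifax's patented approach; just a common explainability
# pattern: per-feature contributions to a linear score, ranked as reason codes.
# All feature names, weights, and baseline values are made up.

WEIGHTS = {"payment_history": 0.55, "utilization": -0.30, "account_age_years": 0.10}
BASELINE = {"payment_history": 0.95, "utilization": 0.30, "account_age_years": 7.0}

def explain(applicant: dict) -> list[tuple[str, float]]:
    """Rank each factor by how much it moved the score relative to a baseline applicant."""
    contributions = {
        name: WEIGHTS[name] * (applicant[name] - BASELINE[name])
        for name in WEIGHTS
    }
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

if __name__ == "__main__":
    applicant = {"payment_history": 0.80, "utilization": 0.65, "account_age_years": 3.0}
    for factor, impact in explain(applicant):
        direction = "lowered" if impact < 0 else "raised"
        print(f"{factor}: {direction} the score by {abs(impact):.2f}")
```

The point of a readout like this is the one Koehler makes: whether the end user is a data scientist or a loan officer, they can see which factors drove the output, rather than treating the model as a black box.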

As AI pioneers like Equifax and others find ways to address and mitigate risks like these, one fear remains for companies that have yet to get started: According to the report, organizations that are not yet pursuing AI risk never catching up with competitors.

Download the report, “An Executive's Guide to Real-World AI: Lessons from the Front Lines of Business” for more real-world insights and examples from companies that are leading the way. 

Carla Rudder is a community manager and program manager for The Enterprisers Project. She enjoys bringing new authors into the community and helping them craft articles that showcase their voice and deliver novel, actionable insights for readers.