3 steps to prioritize responsible AI

As you develop your organization's artificial intelligence (AI) strategy, consider this advice for building responsible AI practices.

Artificial intelligence (AI) continues to be a huge growth driver for companies of all sizes going through digital transformation, with IDC predicting that AI spending will reach $500B by 2023. AI is helping organizations identify new business models, increase revenue, and gain a competitive advantage.

But along with these opportunities, AI also brings immense responsibilities.

AI systems and pipelines are complex. They have many areas of potential failure, from issues with the underlying data to errors in the tools themselves to outcomes that adversely affect people and processes. As AI becomes ubiquitous, prioritizing its responsible use is what allows organizations to grow revenue streams while keeping individuals and communities safe from risk and harm.

Given these heightened risks, lawmakers around the world are stepping in with guidelines on AI systems and their appropriate use. The latest comes from New York City, where employers using AI hiring tools will face penalties if those systems produce biased decisions. As a result of these decrees, technology builders and governments must strike a delicate balance between innovation and accountability.

[ Also read AI ethics: 5 key pillars. ]

By following this simple three-step process, organizations can better understand responsible AI and how to prioritize it as their AI investments evolve:

Step 1: Identify common failure points; think holistically

The first step is realizing that most AI challenges start with data. We can blame an AI model for incorrect, inconsistent, or biased decisions, but ultimately, it is the data pipelines and machine learning pipelines that guide and transform data into something actionable.

Thinking more broadly, stress testing can cover multiple points across raw data, AI models, and predictions. To better evaluate an AI model, ask these key questions (a brief code sketch of a few of these checks follows the list):

  • Was an under-sampled data set used?
  • Was it representative of the classes the model was being trained on?
  • Was a validation data set used?
  • Was the data labeled accurately?
  • As the model encounters new runtime data, does it continue to make the right decisions, even though that data may differ slightly from what it was trained on?
  • Was the data scientist given proper guidelines to ensure features were selected fairly when training and retraining the model?
  • Does the business rule take the appropriate next step, and is there a feedback loop?
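
As an illustration, here is a minimal sketch of how a couple of these checks might look in practice. It assumes a pandas DataFrame named df with a "label" column; the DataFrame, column names, and split parameters are hypothetical stand-ins, not a prescribed implementation.

    # Minimal sketch of a few of the checks above; `df` and its column
    # names are hypothetical stand-ins for real pipeline data.
    import pandas as pd
    from sklearn.model_selection import train_test_split

    def class_balance(df: pd.DataFrame, label_col: str = "label") -> pd.Series:
        """Report class proportions to spot an under-sampled class."""
        return df[label_col].value_counts(normalize=True)

    def split_with_validation(df: pd.DataFrame, label_col: str = "label"):
        """Hold out a validation set, stratified so it stays representative."""
        return train_test_split(
            df, test_size=0.2, stratify=df[label_col], random_state=42
        )

    # Toy example: an 80/20 class split flags a potentially under-sampled class.
    df = pd.DataFrame({"feature": range(100), "label": [0] * 80 + [1] * 20})
    print(class_balance(df))                    # 0 -> 0.8, 1 -> 0.2
    train_df, valid_df = split_with_validation(df)
    print(len(train_df), len(valid_df))         # 80 20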

By taking a broader perspective, organizations can think about safeguards regarding people, processes, and tools. Only then can a robust and adaptable framework be built to address the fractures in the systems.

Step 2: Build a multi-disciplinary framework

When it comes to creating a framework based on people, processes, and tools, how do we apply this to complex data and AI lifecycles?

Data lives everywhere – on-premises, in the cloud, at the edge. It’s used in batch and real time across hybrid infrastructures, allowing organizations to deliver insight closer to where that data resides. With low-code and automated tooling, users with varying skill levels can now build AI models on top of sophisticated machine learning algorithms.

A strong technology foundation is essential for meeting the evolving needs of the workloads using AI, as well as the AI technologies themselves – libraries, open source tools, and platforms. Common capabilities found in this foundation include, but are not limited to:

  • Bias detection in both AI model training and deployment
  • Drift detection in model deployment
  • Explainability in both model training and deployment
  • Anomaly and intrusion detection
  • Governance of data through policy management
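
To make two of these capabilities concrete, here is a minimal, illustrative sketch of a bias check (disparate impact ratio) and a drift check (population stability index) using plain NumPy. The data, group labels, and thresholds in the comments are assumptions for illustration, not settings from any particular platform.

    import numpy as np

    def disparate_impact(preds: np.ndarray, group: np.ndarray) -> float:
        """Ratio of favorable-outcome rates between two groups (bias check).
        Ratios well below ~0.8 are a common warning sign."""
        rate_a = preds[group == 0].mean()
        rate_b = preds[group == 1].mean()
        return min(rate_a, rate_b) / max(rate_a, rate_b)

    def population_stability_index(expected, actual, bins=10) -> float:
        """PSI between training-time and runtime score distributions (drift check).
        Values above ~0.2 often indicate meaningful drift."""
        edges = np.histogram_bin_edges(expected, bins=bins)
        e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
        a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
        e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0)
        a_pct = np.clip(a_pct, 1e-6, None)
        return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

    rng = np.random.default_rng(0)
    preds = rng.integers(0, 2, 1000)           # hypothetical binary decisions
    group = rng.integers(0, 2, 1000)           # hypothetical protected attribute
    print(disparate_impact(preds, group))
    train_scores = rng.normal(0.0, 1.0, 1000)  # scores seen during training
    live_scores = rng.normal(0.5, 1.0, 1000)   # scores seen in production
    print(population_stability_index(train_scores, live_scores))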

More advanced platforms offer remediation paths for many of the above capabilities, so that issues can be addressed and fixed rather than merely detected.
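
For example, one common remediation path once class or group skew is detected is to reweight training examples and retrain. The sketch below, using scikit-learn, is a hedged illustration of that idea, not any specific platform's remediation workflow.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.utils.class_weight import compute_sample_weight

    def retrain_with_reweighting(X, y):
        """Give under-represented classes more weight before refitting."""
        weights = compute_sample_weight(class_weight="balanced", y=y)
        model = LogisticRegression(max_iter=1000)
        model.fit(X, y, sample_weight=weights)
        return model

    # Toy usage: an 80/20 imbalance gets rebalanced via sample weights.
    X = np.random.default_rng(1).normal(size=(100, 3))
    y = np.array([0] * 80 + [1] * 20)
    model = retrain_with_reweighting(X, y)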

However, a technology-based approach is not enough. Responsible AI requires a culture that embraces it. This begins with executive management teams setting company-wide objectives to raise awareness and priority. Steering committees and officers dedicated to the responsible AI mission are critical to its success. It is becoming increasingly common for organizations to hire Chief AI Ethicists, who coordinate the company’s internal practices with its external social mandates and act as conduits for managing the company’s broader societal responsibilities.

Step 3: Invest, build skill, and educate

In the final step, we need to have a bias (no pun intended) toward action.

Coping with the technology challenges associated with AI requires investing in a diverse skill set, which is not an easy task. However, even educating internal teams with content developed by large technology companies like Google, IBM, and Microsoft can help bootstrap internal skill sets.

It’s also important to educate lawmakers and government officials. Regular briefings and coordination with local and federal agencies help policies keep pace with technical innovation.

With AI becoming more pervasive, establishing guardrails – both for the technology and society – is more important than ever. Organizations seeking to understand, build, or use AI should prioritize responsible AI. These three steps will equip leaders to promote responsible AI within their organizations, build a foundation for it, and sustain it.

[ Want best practices for AI workloads? Get the eBook: Top considerations for building a production-ready AI/ML environment. ]

Gaurav Rao is Executive Vice President and General Manager of Machine Learning and AI at AtScale. Before AtScale, Gaurav served as VP of Product at Neural Magic. Before Neural Magic, he served in a number of executive roles at IBM, spanning product, engineering, and sales. He is an advisor to data and AI companies, including DataRobot.