Artificial intelligence: 3 ways to prioritize responsible practices

The question of how to use artificial intelligence (AI) tools responsibly and without bias remains largely unanswered. As you develop your AI strategy, consider these ethical best practices.
The question of how to use AI responsibly has been a hot topic for some time, yet little has been done to implement regulations or ethical standards. To start seeing real industry change, we need to shift from simply discussing the risks of unbridled AI to implementing concrete practices and tools.

Here are three steps practitioners can take to make responsible AI a priority today.

1. Check for model robustness

AI models can be sensitive. Something as minor as capitalization can affect a model’s ability to process data accurately. Accurate results are foundational to responsible AI, especially in industries like healthcare. For example, a model should understand that reducing the dose of a medication is a positive change, no matter how the surrounding text phrases it.

Tools like CheckList, an open source resource, measure failure rates for natural language processing (NLP) models on behaviors that aren’t typically tested. By generating a variety of targeted tests, CheckList can evaluate a model’s robustness and surface its errors systematically. Sometimes the fix is as easy as adding training data with a more pronounced sentiment – “I like ice cream VERY much” alongside “I like ice cream” – so that, although the statements differ, the model learns to treat both as positive.
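
To make this concrete, here is a minimal sketch of behavioral testing with CheckList (pip install checklist). The toy predict_proba classifier is a stand-in of my own invention, and the details should be read as illustrative rather than as the library’s definitive usage:

    # Behavioral testing with CheckList. The keyword-based "model"
    # below is a toy stand-in for a real sentiment classifier.
    import numpy as np
    from checklist.editor import Editor
    from checklist.perturb import Perturb
    from checklist.test_types import MFT, INV
    from checklist.pred_wrapper import PredictorWrapper

    def predict_proba(texts):
        # Toy model: returns [P(negative), P(positive)] per input.
        probs = []
        for t in texts:
            p_pos = 0.9 if 'like' in t.lower() or 'love' in t.lower() else 0.2
            probs.append([1 - p_pos, p_pos])
        return np.array(probs)

    editor = Editor()

    # Minimum Functionality Test: intensified positives stay positive.
    templates = editor.template('I like {food} {intensifier}.',
                                food=['ice cream', 'pizza'],
                                intensifier=['very much', 'VERY much'])
    mft = MFT(templates.data, labels=1,
              name='Intensified positives stay positive',
              capability='Vocabulary')

    # Invariance test: capitalization should not flip the prediction.
    sentences = ['I like ice cream.', 'Reducing the dose was a good change.']
    perturbed = Perturb.perturb(sentences, lambda x: [x.upper()])
    inv = INV(perturbed.data, name='Robust to capitalization',
              capability='Robustness')

    wrapped = PredictorWrapper.wrap_softmax(predict_proba)
    for test in (mft, inv):
        test.run(wrapped)
        test.summary()  # prints the failure rate for each behavior

Each test reports a failure rate, so a capitalization change that flips a prediction shows up immediately.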

[ Also read AI ethics: 4 things CIOs need to know. ]

2. Identify label errors automatically

Most widely used datasets contain serious labeling mistakes, since training data usually includes noise and errors. For example, a computer vision dataset might label an image of a lobster as a crab. Cleanlab, another open source tool, automatically finds label errors in any machine learning dataset. It uses confident learning, which compares a model’s predicted probabilities against the given labels to estimate where the noise is and reduce the dataset’s effective error rate.

Labeling errors are a big deal: they can compromise the quality of a dataset. Cleanlab can automatically identify cases with wrong labels and propose a correction. By evaluating the labels that seem to conflict with the rest of a dataset, Cleanlab predicts which labels are most likely to be wrong, making it easier for people to find and fix errors.
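
As an illustration, here is a minimal sketch of Cleanlab’s find_label_issues on a synthetic dataset with deliberately flipped labels. The data generation is invented for the example; the Cleanlab call follows the library’s documented pattern of ranking likely label errors from out-of-sample predicted probabilities:

    # Label-error detection with cleanlab on a synthetic dataset
    # in which 50 labels are deliberately flipped to simulate noise.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict
    from cleanlab.filter import find_label_issues

    X, true_labels = make_classification(n_samples=1000, n_classes=3,
                                         n_informative=5, random_state=0)
    labels = true_labels.copy()
    flipped = np.random.RandomState(0).choice(len(labels), size=50,
                                              replace=False)
    labels[flipped] = (labels[flipped] + 1) % 3  # inject label noise

    # Confident learning needs out-of-sample predicted probabilities,
    # obtained here via cross-validation.
    pred_probs = cross_val_predict(LogisticRegression(max_iter=1000),
                                   X, labels, cv=5, method='predict_proba')

    # Rank examples whose given labels conflict with the model's view.
    issues = find_label_issues(labels=labels, pred_probs=pred_probs,
                               return_indices_ranked_by='self_confidence')
    print(len(issues), 'suspected label errors; review these first:')
    print(issues[:20])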

3. Detect and mitigate bias

Combating bias is one of the trickier aspects of delivering responsible AI solutions because humans and systems are inherently biased. For example, you can easily infer a patient’s gender from their medical record, even if it never explicitly states male or female, simply from the “she” or “he” pronouns it uses. Age is similarly straightforward. Factors like race, ethnicity, and the social or environmental conditions that affect health are just as vital, however, and not as easily discernible.

The problem is that data reflects real-world patterns of health inequality and discrimination. Groups that are underrepresented in training data stay underrepresented in the results, leading to biased AI design and deployment practices.

While systemic problems are at the root of this, NLP can help. In healthcare, for example, it can help practitioners make sense of structured and unstructured data and create a more complete and accurate picture of each patient, one the model can learn from and improve on over time. Fortunately, there are many open source tools available: here are 12 options to explore.
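
As one concrete example of bias detection, here is a sketch using the open source Fairlearn library – one of many such tools, and not one specifically named above – on synthetic data where a hypothetical model selects one group less often than another:

    # Group fairness metrics with Fairlearn on synthetic data. The
    # "model" output is fabricated to under-select one group.
    import numpy as np
    from fairlearn.metrics import MetricFrame, demographic_parity_difference
    from sklearn.metrics import recall_score

    rng = np.random.RandomState(0)
    n = 1000
    sex = rng.choice(['female', 'male'], size=n)  # sensitive attribute
    y_true = rng.binomial(1, 0.5, size=n)         # ground-truth outcomes
    y_pred = np.where(sex == 'female',            # biased predictions
                      rng.binomial(1, 0.35, size=n),
                      rng.binomial(1, 0.55, size=n))

    # Gap in selection rates between groups (0 means perfectly even).
    gap = demographic_parity_difference(y_true, y_pred,
                                        sensitive_features=sex)
    print('Demographic parity difference:', round(gap, 2))

    # Break a quality metric (recall) down by group.
    frame = MetricFrame(metrics=recall_score, y_true=y_true,
                        y_pred=y_pred, sensitive_features=sex)
    print(frame.by_group)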

There’s still much work to be done to ensure that AI is used responsibly. Until standardized industry requirements are in place, enterprises must take matters into their own hands. To bring responsible AI beyond slideware, it’s time to start implementing ethical practices today. Checking for model robustness, identifying labeling errors, and detecting and mitigating bias are good ways to start.

[ Want best practices for AI workloads? Get the eBook: Top considerations for building a production-ready AI/ML environment. ]

David Talby, PhD, MBA, is the CTO of John Snow Labs. He has spent his career applying AI, big data, and data science to real-world problems in healthcare, life sciences, and related fields.