Artificial intelligence: 3 tips to ensure responsible and ethical use

Even as AI becomes more ubiquitous in our everyday lives, security and bias concerns remain. Here are some best practices to help organizations reduce bias and protect privacy.

Artificial intelligence (AI) already impacts our daily lives in ways we never imagined just a few years ago – and in ways we're not yet aware of. From self-driving cars to voice-assisted devices to predictive text messaging, AI has become a necessary and unavoidable part of our society, including in the workplace.

Data shows that the use of AI in business is increasing. In 2019, a Gartner report stated that 37% of organizations had implemented AI in some capacity. More recently, Gartner predicted that the global AI software market would be worth $62.5 billion by the end of 2022, a 21% jump from 2021.

While AI’s impact is undeniable, consumers’ concerns about the ethics and security of AI technology persist. Because of this, companies must strive to alleviate these concerns by always protecting customer data when employing AI-enabled technology.

The need for responsible AI

Any consumer-facing organization that employs AI technology must act responsibly, especially when customer data is involved. Tech leaders using AI must give equal focus to two responsibilities at all times: reducing bias in their models and preserving the confidentiality and privacy of the data involved.

[ Also read Artificial intelligence: 3 ways to prioritize responsible practices. ]

Along with ensuring data security, responsible AI practices should root out biases embedded in the models that power these systems. Companies should regularly evaluate the bias that may be present in their vendors' models, then advise customers on the most appropriate technology for them. This oversight should also include correcting biases with pre- and post-processing rules.
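
To make that concrete, a post-processing rule can re-threshold model scores per group after measuring outcome rates on held-out data. The Python sketch below is a minimal illustration; the group names, scores, and thresholds are all invented for the example, not taken from any particular vendor's model:

```python
# Hypothetical post-processing rule: apply a per-group decision threshold
# so that positive-outcome rates across groups become comparable.
from typing import Dict, List

def apply_threshold_rule(scores: List[float], groups: List[str],
                         thresholds: Dict[str, float]) -> List[bool]:
    """Return a decision per item, using that item's group threshold."""
    return [score >= thresholds.get(group, 0.5)
            for score, group in zip(scores, groups)]

# Suppose audits show the model systematically under-scores group "B";
# a slightly lower threshold for "B" corrects the measured skew.
scores = [0.62, 0.48, 0.55, 0.41]
groups = ["A", "B", "A", "B"]
thresholds = {"A": 0.50, "B": 0.45}
print(apply_threshold_rule(scores, groups, thresholds))
# [True, True, True, False]
```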

While companies cannot entirely remove the biases inherent to AI systems trained on large quantities of data, they can work to minimize their adverse effects. Here are some best practices:

1. Put people first

AI can be beneficial in reducing the amount of repetitive work carried out by humans, but humans should still be prioritized. Create a culture that doesn’t imply an either/or scenario between AI and humans. Tap into human teams’ creativity, empathy, and dexterity, and let AI create more efficiencies.

2. Consider data and privacy goals

Once the goals, long-term vision, and mission are in place, ask yourself: What data does the company own? Numerous foundation models and off-the-shelf solutions can be used without any training data, but in many cases, adapting a model to your own data can achieve much higher accuracy.
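
As a simple illustration of using a pretrained model with no training data of your own, the sketch below assumes the Hugging Face transformers library is available; the example text and candidate labels are invented:

```python
# Zero-shot classification with a pretrained foundation model: no
# in-house training data required, at the cost of domain accuracy.
from transformers import pipeline

classifier = pipeline("zero-shot-classification")
result = classifier(
    "My order arrived two weeks late and nobody answered my emails.",
    candidate_labels=["billing", "shipping", "product quality"],
)
print(result["labels"][0])  # highest-scoring label, likely "shipping"
```

Fine-tuning the same kind of model on company data typically improves accuracy on domain-specific text.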

Adapting AI systems to the company’s goals and data will yield the best results. Done correctly, data preparation and cleaning can remove biases at this step. Removing bias from data is key to developing responsible AI solutions: you can drop features that unduly sway the overall result and perpetuate existing biases, such as attributes that act as proxies for protected characteristics.
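
One way to approach this during data preparation, sketched below with pandas, is to drop explicitly sensitive columns and flag numeric features that correlate strongly with a protected attribute. The column names and the 0.8 cutoff are assumptions made for the example:

```python
import pandas as pd

def drop_sensitive_and_flag_proxies(df: pd.DataFrame, sensitive: list,
                                    protected: str, cutoff: float = 0.8):
    """Drop sensitive columns and any numeric feature that looks like a
    proxy for the protected attribute (|correlation| above the cutoff)."""
    codes = df[protected].astype("category").cat.codes
    numeric = df.drop(columns=sensitive).select_dtypes("number")
    corr = numeric.corrwith(codes).abs()
    proxies = corr[corr > cutoff].index.tolist()
    print(f"Possible proxy features for {protected!r}: {proxies}")
    return df.drop(columns=sensitive + proxies)

# Invented example: 'zip_income_index' tracks gender almost perfectly
# here, so it is flagged and dropped along with 'gender' itself.
df = pd.DataFrame({
    "gender": ["F", "M", "F", "M"],
    "zip_income_index": [0.9, 0.2, 0.8, 0.1],
    "tenure_months": [12, 30, 28, 10],
})
clean = drop_sensitive_and_flag_proxies(df, ["gender"], "gender")
print(clean.columns.tolist())  # ['tenure_months']
```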

On the privacy front, commit to protecting all the data you collect, no matter how large the volume. One way to do this is to work only with third-party vendors that strictly follow key legislation, such as GDPR, and maintain critical security certifications, such as ISO 27001. Adhering to these regulations and earning these certifications takes significant effort, but doing so demonstrates that an organization is equipped to protect customer data.

3. Implement active learning

Once a system is in production, gather human feedback on the technology’s performance and biases. If users notice that output differs unexpectedly between scenarios, create guidelines for reporting and fixing those issues. Fixes can be applied at the AI system’s core or as corrections to its output.
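
One common way to structure this feedback loop, sketched below, is to route low-confidence predictions to human reviewers and queue their corrections as training data for the next cycle. The confidence cutoff and all names here are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLoop:
    confidence_cutoff: float = 0.75          # below this, ask a human
    review_queue: list = field(default_factory=list)
    corrections: list = field(default_factory=list)

    def handle(self, item: str, prediction: str, confidence: float) -> str:
        if confidence < self.confidence_cutoff:
            self.review_queue.append((item, prediction))
        return prediction

    def record_feedback(self, item: str, corrected_label: str) -> None:
        # Human-verified labels become training data for the next cycle.
        self.corrections.append((item, corrected_label))

loop = FeedbackLoop()
loop.handle("support ticket 42", "refund request", confidence=0.55)
loop.record_feedback("support ticket 42", "shipping complaint")
print(len(loop.review_queue), len(loop.corrections))  # 1 1
```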

In recent years, some of the world’s largest organizations, including Google, Microsoft, and the European Commission, have built frameworks for and shared knowledge of their responsible AI guidelines. As more organizations formalize their language around responsible AI, partners and customers will come to expect it.

When one mistake can cost your brand millions of dollars or ruin its reputation and its relationships with employees and customers, additional support helps. No one wants to work with an organization that is careless with its customers’ data or that uses biased AI solutions. The sooner your organization addresses these issues, the more consumers will trust you, and the benefits of using AI will start rolling in.

[ Check out our primer on 10 key artificial intelligence terms for IT and business leaders: Cheat sheet: AI glossary. ]

Diego Bartolome is chief technology officer at Language I/O. He has 16 years of experience at the intersection of language, computers, and technology, helping companies communicate across languages.