Responsible AI by design: Building a framework of trust

As AI becomes increasingly mainstream, responsible AI practices are more critical than ever. Check out these tips on how to implement responsible AI by design.

The opportunities and the market for artificial intelligence are growing rapidly, and organizations are increasingly relying on it to improve productivity and profitability. However, AI deployment is not without risks – including threats to customer privacy, algorithmic bias, and security gaps. As AI becomes more embedded in decision-making, the potential to amplify both the positive and negative impacts of decisions at scale escalates.

To fully realize the transformative potential of AI, we must responsibly harness the technology and establish a framework of trust. Resources and guidance can help organizations avoid potentially harmful situations and look optimistically at how this disruptive technology can benefit humanity.

It’s essential to develop a comprehensive responsible AI framework that includes practices, tools, governance, responsibilities, and more. This framework should enable responsible AI by design, resulting in increased transparency and trust across the AI lifecycle.

Learning from the not-so-distant past

AI has seen its share of failures and poor implementations. Several public cases of intentional and unintentional overreaching have raised concerns.

Let’s start with a cautionary tale: European authorities have fined a facial recognition firm nearly €50 million, and several additional lawsuits are pending in the U.S., over accusations that the company sold access to billions of facial photos, many scraped from social media without the knowledge or consent of the individuals pictured.

This controversial business model is a clear example of the misuse of data to train AI and demonstrates some potential negative consequences of AI: ethical and financial risks, breach of privacy and trust, erosion of brand value, steep fines, and legal trouble.

[ Related read: 3 steps to prioritize responsible AI. ]

Often, disparities and poor outcomes of AI decisions only come to light through retrospective analysis, audits, and public feedback – long after the damage is done. Responsible AI requires a design-first approach that considers stakeholders, transparency, privacy, and trust before implementation, throughout the lifecycle, and across business roles.

Barriers to responsible AI

Responsible AI practices have not kept pace with AI adoption for various reasons. Some firms put responsible AI on hold because of legislative uncertainty and complexity, delaying the value they could realize from business opportunities. Other challenges include concerns about AI’s potential for unintended consequences, lack of consensus on how to define and implement responsible AI practices, and over-reliance on tools and technology.

To overcome these challenges, it’s important to understand that technology alone cannot keep pace with the rapidly evolving AI space. Tools for bias detection, privacy protection, and regulatory compliance can lure organizations into a false sense of confidence and security. Accountability structures and incentives for responsible AI may look good on paper but are often ineffective in practice. Bringing multiple perspectives and a diversity of opinions to technology requires a disciplined, pragmatic approach.

To adopt a responsible AI strategy, some key concepts must be kept in mind, starting with setting a strong foundation.

Putting responsible AI into practice

When creating AI-powered systems, organizations must set ground rules. Establishing a dedicated internal review body and putting safeguards in place as part of the organizational AI strategy and program help protect against intentional or unintentional misuse of the technology.

Trust is critical when it comes to implementing responsible AI by design. Organizations instill trust through the principles they follow, the practices they adopt, and the outcomes they deliver in the market. Consistently and continuously scale AI with trust and transparency, and operationalize AI throughout and beyond your organization.

The following guiding principles can help you establish a framework and strategy for responsible AI by design:

  • Human oversight and governance
  • Fairness, inclusiveness, and prevention of harm
  • Transparency and explainability
  • Reliability, safety, security, and respect for privacy

Trust starts with protecting data to ensure it is accurate, timely, and secure. Maintaining the right level of privacy and access to an organization’s data is foundational to any strategy and governance. Having the right safety nets in place to ensure checks and balances creates the freedom to innovate. The outcomes of these AI efforts can then be realized internally and externally through products and services, driving digital engagement, cost reduction, and new market opportunities.

Here are some actionable ways to translate these principles into responsible AI by design:

  • Establish a review board that represents cross-functional disciplines across the organization.
  • Create a governance structure focused on security, reliability, and safety.
  • Cultivate a culture of trust by employing guiding principles of fairness and inclusivity.
  • Engage stakeholders early and often in the process to surface and mitigate potential harm while driving business value and addressing customers’ needs.
  • Build a diverse work culture and promote constructive discussions to help mitigate bias.
  • Ensure design and decision-making processes are documented to the point where they can be reverse-engineered if an incident occurs to determine the root cause.
  • Prioritize and embed responsible AI into the process through fairness tests and explainable features that internal teams and customers alike can easily understand (a minimal sketch of such a test follows this list).
  • Develop AI observability practices and create a rigorous development process that values visibility across development and review teams.
  • Infuse an ethical lens into how teams adopt AI to ensure equitable and responsible outcomes.
  • Create a community ecosystem of support by collaborating with the AI community about the future of responsible AI.
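
To make the fairness-test bullet concrete, the following is a minimal sketch, in Python with pandas, of the kind of check a team might run before deployment. It assumes a binary classifier whose predictions are collected in a DataFrame; the column names, the loan-approval scenario, and the 0.2 threshold are hypothetical illustrations, not a prescribed standard.

```python
# A minimal sketch of a pre-deployment fairness check (hypothetical
# column names and threshold; adapt to your own governance process).
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           outcome_col: str,
                           group_col: str) -> float:
    """Gap between the highest and lowest positive-outcome rates
    across groups; 0.0 means every group is treated alike."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical predictions from a loan-approval model.
predictions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
})

gap = demographic_parity_gap(predictions, "approved", "group")
print(f"Demographic parity gap: {gap:.2f}")

if gap > 0.2:  # threshold agreed on by your review board
    print("Fairness test failed: flag for review before deployment")
```

A check like this is deliberately simple; the point is that it runs automatically in the development pipeline, so a disparity surfaces before deployment rather than in a retrospective audit.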

Following these principles can help you champion responsible AI leadership and guide your teams to create equitable and responsible outcomes while using AI.

[ Also read AI ethics: 5 key pillars. ]

Engaging stakeholders

When engaging with stakeholders, ask targeted questions that seek perspective on any potential global implications of the proposed system. For example, applying AI systems in medical applications might be inappropriate or even harmful: the systems could reveal private medical details without a patient’s consent or make mistakes in patient diagnosis and treatment recommendations.

Such challenges cannot be mitigated through a purely technological lens. The policy approach to the application and deployment of AI, and the model itself, determine these outcomes. Therefore, it is imperative to engage a diverse group of stakeholders and seek information about any potential implications.

Often, practitioners engage with stakeholders only to gauge the efficacy of user interactions or to determine a system's usability. View stakeholder feedback as an opportunity to learn about and mitigate the unintended consequences of a technology as much as to develop for its intended outcomes. Establishing a review board that reflects diverse perspectives for ongoing project review can help build a culture of responsible AI.

With AI regulations on the horizon, organizations will be better prepared if program guardrails and processes are already in place. By establishing clear processes, communicating the capabilities and purpose of AI systems, and explaining decisions to those directly and indirectly affected, companies can fully realize the potential of AI.

Calls to action

As AI technologies and regulation continue to evolve, there are several ways companies can employ responsible AI by design:

  • Partner with a trusted service provider on your AI strategy and program.
  • Create an ethical foundation for AI practices by establishing guiding responsible AI principles and an AI governance system.
  • Ensure that your organization’s teams follow responsible AI practices throughout the AI lifecycle – assessment, development, deployment, and monitoring – to maintain checks and balances.
  • Champion responsible AI by participating in global consortia and communities.
  • Establish a review board to consult on the broader impacts of AI systems.
  • Ensure that AI outcomes are clearly explained by the input data and are understandable to the broader team and consumers.
  • Conduct regular audits of deployed AI systems and their decisions, searching for evidence of disparate treatment, inequitable outcomes, and other unintended consequences. If you find an issue, remediate it, retraining the system when possible (a minimal sketch of such an audit follows this list).
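
As a companion to the audit bullet above, here is a minimal sketch of a periodic check over a logged decision stream. It assumes each deployed decision is logged with a timestamp, an outcome, and a protected attribute; the column names and data are hypothetical, and the four-fifths (80%) rule used for flagging is one common heuristic, not a universal legal test.

```python
# A minimal sketch of a periodic audit of logged decisions
# (hypothetical column names and data; adapt to your own logs).
import pandas as pd

def disparate_impact_ratios(log: pd.DataFrame,
                            decision_col: str,
                            group_col: str) -> pd.Series:
    """Each group's positive-decision rate divided by the highest
    group's rate; values below 0.8 fail the four-fifths rule."""
    rates = log.groupby(group_col)[decision_col].mean()
    return rates / rates.max()

# Hypothetical decision log from a deployed hiring model.
decision_log = pd.DataFrame({
    "timestamp": pd.date_range("2023-01-01", periods=8, freq="D"),
    "hired":     [1, 1, 1, 0, 1, 0, 0, 0],
    "group":     ["a", "a", "a", "a", "b", "b", "b", "b"],
})

for group, ratio in disparate_impact_ratios(decision_log,
                                            "hired", "group").items():
    status = "FLAG for review" if ratio < 0.8 else "ok"
    print(f"group={group} impact_ratio={ratio:.2f} [{status}]")
```

Run on a schedule – for example, against each month’s decision log – a report like this gives the review board a recurring, evidence-based view of outcomes rather than a one-time pre-launch check.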

Even outside your organization, you can champion responsible AI as a private citizen in the following ways:

  • Seek brands that value trust, transparency, fairness, and responsibility in their systems.
  • Look for partners that are committed to responsible AI and demonstrate that commitment in their products.
  • Advocate for AI governance and accountability and encourage lawmakers to enact regulations regarding the use of AI.
  • Engage in professional development and/or organizations that help define standards for the ethical use of technology.

[ Want best practices for AI workloads? Get the eBook: Top considerations for building a production-ready AI/ML environment. ]

Contributors

Christina Mongan, director of emerging technologies on the innovation ecosystem and emerging technologies team at Unisys, is focused on driving innovation, transformation, and the adoption of emerging technologies to meet client and business needs.
Simon Price, Ph.D., senior director of data and AI, leads the data science and data engineering practice in the Unisys Automation Hub. His team helps organizations leverage data and AI to drive fully automated decision-making, augmenting with human-in-the-loop decisions where necessary.
