Generative AI: 3 do's and don'ts for IT leaders

As generative AI increasingly finds its way into enterprise use, IT leaders must provide guidance. Here are some key factors—and recommendations—to consider.

Arthur C. Clarke’s third law states that “any sufficiently advanced technology is indistinguishable from magic.” The magically advanced technologies of generative AI and generative pretrained transformers (GPTs) like ChatGPT don’t disappoint.

Over the next few years, enterprises will fall somewhere on the spectrum between fully embracing the technology and shunning it outright as part of their digital transformation journey. As IT leaders, we need to help our organizations and their employees understand its benefits and risks and make informed decisions about how to apply it appropriately.

Employees will use these technologies whether enterprises endorse them or not, so providing guardrails is critical to ensure your organization isn't at risk. Here are some considerations to help determine when and how your organization should use generative AI.

1. Do know the product is you

Just as you scrutinize cloud providers' terms of service for what they do with your organization's data, you need to know how generative AI services collect and use data. The datasets behind generative AI aren't just yours, and they could include copyrighted material that could find its way into your employees' work products.

[ Also read How artificial intelligence can inform decision-making. ]

Plus, the generative AI may use your employees' inputs to train itself further. Proprietary information, trade secrets, and sensitive customer data ("steak") could become part of the training models' "hamburger." Worse, someone could potentially extract the original data from the trained model, turning the hamburger back into steak.

Recommendation: Ensure the service's data usage terms are compatible with your intended uses.

2. Don't copy, paste, and ship


Assuming the usage terms are acceptable to your organization, users should still view generative AI’s output with a skeptical eye. The output could be biased based on the generative AI’s training data, or it could be outdated or even just plain wrong.

Technically, GPTs are trained on a large text corpus to predict the next word in a passage. As the corpus gets larger, the predictions are generally better, but predictions can still be incorrect and even fail spectacularly. The purveyors of generative AI services continuously improve their models to root out harmful outputs, but those improvements could have unintended consequences.
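To make the "predict the next word" idea concrete, here is a minimal sketch using a toy bigram model. This is an illustration of the prediction task only, not how a real GPT works: GPTs use neural networks trained on vast corpora, while this toy simply counts which word follows which in a tiny sample text. The corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

# Tiny sample corpus; a real model trains on billions of words.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows each word in the corpus.
follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word):
    # Return the most frequent follower seen in training, or None
    # if the word never appeared with a successor.
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat": it follows "the" twice, "mat" only once
```

Even this toy shows why output can be confidently wrong: the model always returns the statistically likeliest continuation from its training data, with no notion of whether that continuation is true.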

For example, an organization may have modified its GPT model to disallow criticism of the organization or its country’s government. However, the organization may not have explicitly changed the model to disallow criticism of rivals, leading to skewed results.

[ Also read 6 precautions for CIOs to consider about ChatGPT ]

A GPT is no more an authority on a subject than an actor playing a role in improvisational theater. As such, users should take the output with a grain of salt and do their own fact-checking. Responsible AI guidelines around explainability will make fact-checking easier and enhance trust. GPTs are excellent for brainstorming, overcoming writer’s block, and summarizing, but a critical review of the outputs is essential to ensure correctness.

Recommendation: (Mostly) trust, but verify.

3. Do start your generative AI journey with low-risk situations

The best way to understand generative AI is to experiment with it in low-risk situations. Many services are freely available, with more options coming online every day. Experimentation will demystify the magic and hype and let you push the AI’s boundaries to see where it falls short.

Recommendation: Pick a practical project, such as using ChatGPT to optimize your LinkedIn profile, to get a taste of how your organization can put it to work.

The rise of generative AI has introduced a new level of technological capability that can be both fascinating and concerning. While the benefits are vast, our responsibility as IT leaders is to understand the risks and guide our organizations and employees in making informed decisions about the appropriate use of these tools.

By starting with low-risk experimentation and taking a critical approach to output, you can demystify the magic of generative AI and uncover its full potential in enhancing your work and daily life.

[ Want best practices for AI workloads? Get the eBook: Top considerations for building a production-ready AI/ML environment. ]


David Egts
David Egts is MuleSoft's first-ever Public Sector field CTO. He is the executive-level connective tissue between MuleSoft and top public sector decision makers and influencers globally.