6 precautions for CIOs to consider about ChatGPT

What are the potential pitfalls of this buzzworthy AI tool? Our community of experts share risks for CIOs to examine

Did you read our recent piece on 9 ways CIOs can use ChatGPT? Well, there’s often a flipside to exciting new technology. While there are many potential benefits to ChatGPT, it would behoove CIOs to treat it more like a toy than a reliable tool until the future of AI technology is less murky.

“ChatGPT can be highly useful in completing tedious tasks and relieving administrative strain, but the quality of its output depends on the quality of its input – which means it’s capable of error if fed incorrect or incomplete data and instructions,” warns Petr Baudis, CTO & Chief AI Architect at Rossum.

Like most AI, it cannot understand the nuance and complexities of human conversations, so any insights it spits out are likely shallow. Simon Price, Senior Director of Data and AI at Unisys, says, “ChatGPT has no real understanding of what it’s writing; it’s a very smart autocomplete.”

What are some other precautions to consider before going all-in on ChatGPT? We asked our community to share their thoughts about the potential risks for CIOs. Here’s what they said.

Misinformed security teams

“Security teams can use ChatGPT to review software source code for security bugs. Early sample testing shows that ChatGPT results can be extremely accurate and efficient. This creates the notion that ChatGPT may serve as a replacement for secure code reviews and software defect identification. That said, the quality and accuracy of the models and their results could misinform security teams to make poor decisions, miss many security vulnerabilities, and, worst of all, record the code sample in such a way that makes ChatGPT itself an unintended source code repository to be breached by bad actors.

Much remains to be learned about how much enterprises can rely on ChatGPT for such security efforts. More importantly, it is unknown what ChatGPT will store and share itself, making it an untouchable enterprise asset for the short term. Nonetheless, the brightest minds in security are now envisioning how to employ ChatGPT for better security." -Karl Mattson, CISO, Noname Security
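As an illustration of the kind of defect such an automated review would be expected to flag, consider the classic SQL injection bug. This is a hypothetical example (the table and function names are invented), not code from the source:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Flawed: string interpolation lets attacker-controlled input become SQL,
    # the textbook defect a secure code review should catch.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    ).fetchone()

def find_user_safe(conn, username):
    # Fixed: a parameterized query treats the input as data, never as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchone()

# Demo: the unsafe version can be tricked into matching every row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

print(find_user_safe(conn, "alice"))        # (1, 'alice')
print(find_user_safe(conn, "' OR '1'='1"))  # None - injection neutralized
```

A model-based reviewer that misses a pattern this simple, or silently retains the submitted code, is exactly the dual risk Mattson describes.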

Malicious data collection and phishing risks

“CIOs should be aware that bad actors will quickly learn how to use this chatbot as a weapon and open a new era of cyberattacks and malicious data collection. Already, reports have shown its ability to promote financial schemes and scams, taking advantage of social engineering.

CIOs are well aware of frequent phishing attempts on their organizations. They should anticipate that ChatGPT may become the brains behind real-time phishing techniques, replacing emails and texts riddled with poor grammar while actively mapping real-world data to the target recipients en masse. This is likely to be the first frontier of accelerated cyberattacks. Today, ChatGPT may not be able to write malware entirely on its own, but that capability is likely to come. It will be up to CIOs to use the power of AI and ChatGPT to fight bad actors.” -David Bennett, CEO, Object First

False sense of trust

“The success of ChatGPT in a consumer capacity is clear. And since its language model is effectively trained on all the text on the web, the output is so fluent that it can be challenging, if not impossible, to decipher whether the author is human or a bot. But in a higher-stakes enterprise context, the fluency of the response creates a false sense of trust for the user: It looks right. It sounds right. But if you run the numbers, it might be wrong. When you’re trying to make your friends laugh, accuracy doesn’t matter at all. But when trying to make a decision that could impact lives and livelihoods, that unreliability can have catastrophic consequences.

In a sense, ChatGPT is similar to Google Translate. While Google Translate is great for travel and other casual use cases, an enterprise won’t trust it to faithfully translate their marketing materials for new geographies or their contracts for international agreements. The stakes are just too high to gamble on a statistical model.

Successful applications will require organizations to train and fine-tune a model like ChatGPT on proprietary enterprise information to help it interpret and produce the “language” used within that organization. But more importantly, it’ll need to be taught how to map the lingo to the organization’s data systems (ultimately turning human language into the right SQL queries against the right database tables). GPT technology plus a data catalog could make that science-fiction dream a reality.

While its massive ML model makes it sound super smart, ChatGPT will need help from humans and auxiliary technologies to be a smart choice for an enterprise.” -Aaron Kalb, Co-founder and Chief Strategy Officer, Alation
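The lingo-to-SQL mapping Kalb describes can be sketched as a catalog lookup. In this minimal sketch, a plain dictionary stands in for the step a fine-tuned model would perform, and every name (the catalog entries, `orders`, `employees`) is hypothetical:

```python
# A toy "data catalog": maps organizational lingo to real tables and
# aggregates. All entries here are invented, for illustration only.
CATALOG = {
    "revenue": ("orders", "SUM(amount)"),
    "headcount": ("employees", "COUNT(*)"),
}

def lingo_to_sql(term: str, year: int) -> str:
    """Translate an in-house term plus a year into a concrete SQL query.

    In a real system, a model fine-tuned on enterprise data would choose
    the catalog entry; here a dictionary lookup stands in for that step.
    """
    table, aggregate = CATALOG[term]
    # Constrain the query shape instead of trusting free-form model output.
    return f"SELECT {aggregate} FROM {table} WHERE year = {year}"

print(lingo_to_sql("revenue", 2023))
# SELECT SUM(amount) FROM orders WHERE year = 2023
```

The design point is the guardrail: the model picks from a curated catalog, and the SQL itself is assembled deterministically, so a fluent-but-wrong answer cannot invent tables that don't exist.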

Neglecting human-centered support

“It’s essential to preserve and enhance the unique strengths of human employees, such as critical thinking and empathy. These skills are critical in complex problem-solving and delivering personalized, human-centered customer experiences. While optimistic about the potential of ChatGPT, technology leaders must remain focused on data privacy, security, and compliance. As AI continues to disrupt work as we know it, we must maintain a human-in-the-loop approach that focuses on integration, measuring impact, and leveraging the best of human and AI capabilities.” -Tamarah Usher, Senior Director, Strategy and Innovation, Slalom

Overestimating its capabilities

“ChatGPT is an interesting technology, but it can’t analyze data or be trusted to make logical conclusions. It’s trained on data; therefore, it cannot reason, theorize, or “think.” It can merely synthesize information to create text similar to the text it has been trained on. ChatGPT is more useful as a summarizer than as a creator of information. ChatGPT can provide an opportunity to accelerate the development of textual content, tailored to specific audiences, but, at most, it will produce “first drafts” that will need to be reviewed and curated by true experts.” -Jonathan LaCour, CTO, Mission Cloud

Cost, legal, and privacy concerns

“We are excited about the potential for Generative AI technology, such as ChatGPT, as a tool for productivity. However, with new technology like this, we’re also concerned about potential issues:

  • The cost when this goes commercial. As it stands, people are getting hooked on the “free” versions.

  • The legal or ethical implications of this platform. At this point, there’s little transparency and explainability.

  • Legal questions around ownership/copyright of materials produced with ChatGPT or similar AI programs.

  • The potential of inadvertently releasing proprietary information to ChatGPT and that data becoming part of its database.”
    -Suzanne Taylor, Vice President, Innovation and Emerging Technologies, Unisys
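The last risk on Taylor's list, proprietary data leaking into prompts, is often mitigated by scrubbing text before it leaves the enterprise. Here is a minimal sketch of such a pre-submission redactor; the patterns and the "Project Falcon" example are made up, and a real deployment would use a proper DLP tool:

```python
import re

# Hypothetical patterns for data that should never reach an external service.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
    (re.compile(r"(?i)\bproject\s+\w+\b"), "[PROJECT]"),
]

def scrub(prompt: str) -> str:
    """Redact known sensitive patterns before a prompt is sent to an
    external service such as ChatGPT. A sketch, not a complete DLP solution."""
    for pattern, placeholder in SENSITIVE_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("Ask about Project Falcon, contact jane@corp.com"))
# Ask about [PROJECT], contact [EMAIL]
```

Pattern-based scrubbing only catches what it is told to look for; it reduces, rather than eliminates, the risk Taylor raises.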


Katie Sanders
Katie Sanders is the Content and Community Manager for The Enterprisers Project, seeking contributors who have expertise that can be shared with an audience of CIOs and IT leaders. She has always been interested in building relationships and connecting people.