ChatGPT: Not yet a panacea for IT support

ChatGPT, in its current state, offers real value to tech support teams but also has clear limitations. Here's where it holds promise – and where there's room for improvement.

In its first blog post about ChatGPT, OpenAI described the model this way: “We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”

Admitting mistakes and rejecting inappropriate requests is a quantum leap forward for tech teams using AI. Compare this with Microsoft’s conversational bot, Tay, introduced in 2016. It was designed to engage people in dialogue while emulating the style and slang of a teenage girl. Within a day of its release, Tay had tweeted more than 95,000 times, and many of those tweets were abusive and offensive.

Tay had a fundamental problem: It was designed to learn language from its interactions with people, and people quickly taught it the worst of what they had to offer. Microsoft shut Tay down less than 24 hours after its release.

Seven years later, we are smarter, and so is AI. ChatGPT is intelligent and sophisticated – and it’s getting smarter and more sophisticated as you read this. Here’s what we are learning about how it’s being used by tech and engineering teams in operations and security.

Operations

Meeting management

ChatGPT is excellent at converting meeting transcripts into actionable notes, with capabilities that go beyond the built-in recaps in tools like Gong, Zoom, and Teams. It is a super-synthesizer of information, so teams can spend less time writing recaps and more time doing productive work. It not only quickly turns notes into specific, accurate action items but also identifies open issues that will require resolution in the future.

[ Related read: ChatGPT: 3 ways it will impact IT support. ]

Of course, ChatGPT's effectiveness depends on the accuracy of the information it's given. Errors in a transcript carry straight through to the output – in one of our meetings, for example, the transcription tool rendered “ChatGPT” as “Chad GBT.” But with inefficient meetings costing organizations up to $100 million per year, improving meeting management is a big plus for tech teams and organizations.
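To make the recap step concrete, here is a minimal sketch using the OpenAI API, the programmatic counterpart to ChatGPT. It assumes the openai Python package and an OPENAI_API_KEY environment variable; the model name, prompt wording, and file name are placeholders rather than a description of our setup.

```python
# Minimal sketch: turn a raw meeting transcript into action items and open issues.
# Assumes the openai Python package and an OPENAI_API_KEY environment variable;
# the model name, prompt wording, and file name are placeholders, not a
# description of any particular production setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_meeting(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any chat-capable model works here
        temperature=0,          # keep the recap as deterministic as possible
        messages=[
            {
                "role": "system",
                "content": (
                    "You turn raw meeting transcripts into concise recaps. "
                    "Return two sections: 'Action items' (owner, task, due date "
                    "if mentioned) and 'Open issues' that need follow-up."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# A human still reviews the recap before it is shared, since transcription
# errors ("Chad GBT") carry straight through to the output.
with open("meeting_transcript.txt") as f:
    print(summarize_meeting(f.read()))
```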

Customer service and support

ChatGPT can handle routine questions without offending anyone, but companies still need to ensure that humans perform regular quality checks and correct inaccuracies.

Defining OKRs and KPIs

ChatGPT can assist with defining KPIs, suggesting efficiency, quality, and even engineering metrics worth tracking. Given historical data, it can also help forecast measured metrics such as velocity, and it can quickly propose appropriate KPIs for an organization based on industry benchmarks, domain, and challenges.
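As a rough illustration of what "given historical data" can look like in practice, here is a minimal sketch against the OpenAI API. The sprint figures and prompt are invented, and the suggestions it returns are a starting point for discussion, not a plan of record.

```python
# Minimal sketch: suggest KPIs and forecast velocity from historical sprint
# data. The sprint figures and prompt are invented for illustration; treat the
# output as a starting point for discussion, not a plan of record.
from openai import OpenAI

client = OpenAI()

sprint_history = "Sprint 1: 34 pts, Sprint 2: 38 pts, Sprint 3: 31 pts, Sprint 4: 40 pts"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": (
                "You help an engineering manager at a B2B SaaS company define "
                "KPIs. Suggest three to five KPIs covering efficiency and "
                "quality, and forecast next sprint's velocity from the data."
            ),
        },
        {"role": "user", "content": f"Historical velocity: {sprint_history}"},
    ],
)
print(response.choices[0].message.content)
```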

Overall problem-solving

Once it has learned a specific technology stack, ChatGPT can help estimate budgets and costs, model traffic and expected growth, and review existing licenses and subscriptions to identify, for example, which ones are redundant for an organization.

It can help IT teams determine whether there are better or cheaper alternatives to proposed solutions. However, it's a good idea to run a human-driven analysis in parallel, using traditional operations management methods, to confirm the best option.

Our team used ChatGPT to compare compute, storage, and available performance per tier against actual cost, to determine a better return on investment for our use cases.
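A sketch of that kind of comparison might look like the following. The tiers, prices, and workload numbers are made up for illustration, and the model name and prompt are assumptions rather than our actual setup.

```python
# Minimal sketch of the tier-versus-cost comparison described above. The tiers,
# prices, and workload numbers are invented for illustration; as noted, a
# human-driven analysis should run in parallel before any decision.
from openai import OpenAI

client = OpenAI()

tiers = """
Tier A: 8 vCPU, 32 GB RAM, 500 GB SSD, ~12k req/min, $1,400/month
Tier B: 16 vCPU, 64 GB RAM, 1 TB SSD, ~25k req/min, $2,600/month
Tier C: 32 vCPU, 128 GB RAM, 2 TB SSD, ~48k req/min, $5,100/month
"""
workload = "Peak load today is about 9k req/min, growing roughly 30% per year."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                f"Given these pricing tiers:\n{tiers}\n"
                f"and this workload: {workload}\n"
                "Which tier gives the best return on investment over the next "
                "two years, when should we plan to upgrade, and why?"
            ),
        },
    ],
)
print(response.choices[0].message.content)
```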

RFPs

ChatGPT can help create a personalized RFP for a customer, although it's best to run initial analyses in parallel using proven legacy methods. In addition, ChatGPT's training data currently extends only to 2021, so it has limited knowledge of anything more recent. To produce accurate recommendations, it must be fed current, specific data about your company and your client's issues.
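One way to do that feeding is simply to put the current context into the prompt. The sketch below assumes the openai Python package; the company and client details are placeholders, and the draft still needs review against proven RFP methods.

```python
# Minimal sketch: supply current company and client context in the prompt,
# since the model's training data ends in 2021. All details below are
# placeholders.
from openai import OpenAI

client = OpenAI()

context = (
    "Our company: mid-size marketing SaaS vendor, SOC 2 Type II certified, AWS-hosted.\n"
    "Client: EU retailer that needs a GDPR-compliant customer data platform, go-live in Q3."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": (
                "Draft a personalized RFP outline. Use only the facts provided "
                "in the prompt and flag anything that needs verification."
            ),
        },
        {"role": "user", "content": context},
    ],
)
print(response.choices[0].message.content)
```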

Security

Incident response

ChatGPT can be trained to help IT teams quickly triage security incidents and determine the appropriate response based on the threat severity. It can also help with post-incident analysis, quickly identifying insights on how an attack occurred and how to prevent similar attacks in the future.
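As a hedged example of what triage assistance could look like, here is a minimal sketch against the OpenAI API. The incident text, severity scale, and prompt are illustrative assumptions, and a human responder should confirm the classification before acting on it.

```python
# Minimal sketch: triage an incident description into a severity and a first
# response step. The incident text and severity scale are illustrative, and a
# human responder should confirm the classification before acting on it.
from openai import OpenAI

client = OpenAI()

incident = (
    "Multiple failed SSH logins from an unfamiliar IP range against the build "
    "server, followed by one successful login at 02:13 UTC."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {
            "role": "system",
            "content": (
                "You triage security incidents. Reply with a severity "
                "(low/medium/high/critical), a one-line rationale, and the "
                "recommended first response step."
            ),
        },
        {"role": "user", "content": incident},
    ],
)
print(response.choices[0].message.content)
```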

Access control

Training ChatGPT on access control rules can provide automated feedback on whether access requests are valid, helping teams quickly identify and block unauthorized access attempts.
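Here is a minimal sketch of that kind of advisory check, with the policy and the access request as invented placeholders. In practice, the model's answer should only raise a flag, while enforcement stays in deterministic access-control code.

```python
# Minimal sketch: give the model the access policy and ask it to flag requests
# that look invalid. The policy and request are placeholders, and the answer
# should only feed an advisory flag; enforcement stays in deterministic
# access-control code, not in the model.
from openai import OpenAI

client = OpenAI()

policy = (
    "Contractors may not access production databases. "
    "Only members of the 'sre' group may request root on production hosts. "
    "All production access requires an open change ticket."
)
request = (
    "User dana.k (contractor, 'analytics' group) requests read access to the "
    "production billing database; no change ticket referenced."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {
            "role": "system",
            "content": f"Access policy:\n{policy}\nAnswer VALID or INVALID, then cite the policy rule that applies.",
        },
        {"role": "user", "content": request},
    ],
)
print(response.choices[0].message.content)
```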

Compliance

ChatGPT can help ensure compliance with security policies and regulations by analyzing data from disparate sources and pinpointing areas where the organization is non-compliant and/or at risk. It can also recommend remediation steps.
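A minimal sketch of that analysis, with both the policy and the inventory invented for illustration, might look like this; findings still need review by the compliance owner.

```python
# Minimal sketch: compare an inventory excerpt against a policy excerpt and ask
# for gaps and remediation steps. Both excerpts are invented for illustration;
# findings still need review by the compliance owner.
from openai import OpenAI

client = OpenAI()

policy = "Policy: all customer data stores must be encrypted at rest and backed up daily."
inventory = (
    "Inventory: orders-db (encrypted, daily backups); "
    "legacy-reports-db (unencrypted, weekly backups)."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                f"{policy}\n{inventory}\n"
                "List where we are non-compliant or at risk, and recommend remediation steps."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```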

Phishing detection

ChatGPT can be trained to unearth phishing emails and other social engineering attacks. As it learns, it can compare email content against known (learned) phishing tactics and warn IT teams before employees make disastrous clicks.
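As a rough sketch of how such screening could be wired up, assuming the openai Python package and an invented sample email; low-confidence results should be routed to a human reviewer rather than auto-blocked.

```python
# Minimal sketch: score an email for phishing indicators before it reaches the
# employee. The email text is invented, and low-confidence results should be
# routed to a human reviewer rather than auto-blocked.
from openai import OpenAI

client = OpenAI()

email = (
    "Subject: Urgent - password expires today\n"
    "From: it-helpdesk@yourcompany-support.co\n"
    "Click hxxp://yourcompany-support.co/reset within 2 hours or lose access."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {
            "role": "system",
            "content": (
                "You screen emails for phishing. Reply with PHISHING or "
                "LEGITIMATE, a confidence from 0 to 100, and the indicators you "
                "relied on (sender domain, urgency, suspicious links)."
            ),
        },
        {"role": "user", "content": email},
    ],
)
print(response.choices[0].message.content)
```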

Operative words

For IT teams, the operative words in using ChatGPT are “can be trained” and “human quality checks.”

In implementing these use cases, we have found that IT teams need to work with data scientists and machine learning experts to train ChatGPT on relevant data and to integrate it with existing security tools and workflows.

Additionally, ChatGPT itself needs security and quality checks as it is implemented: It can be attacked, fed the wrong data, or pick up the wrong behaviors. It can also fall short at IT support because it may not understand complex technical problems, lacks knowledge of specific IT systems, and cannot always provide personalized advice.

Company culture and lexicon are additional challenges; ChatGPT must be taught these details. Given the current limits of artificial intelligence, ChatGPT will not consistently offer a level of technical expertise comparable to that of a real IT or tech support staff member.

ChatGPT could still be a boon for IT – but it needs human oversight. The technology is still in its early, horse-and-buggy era, so we must watch it the way we watch an autopilot in an airplane or a car. If we fail to monitor it, it could run amok as Tay did.

[ Want best practices for AI workloads? Get the eBook: Top considerations for building a production-ready AI/ML environment. ]

Rachel Shehori is EVP of IT at Optimove. She joined the company in April 2019 as VP of R&D. Before joining Optimove, Rachel held various positions at Hewlett-Packard, including Director of Engineering, SaaS HPE Software. Rachel was also an R&D consultant for an aerospace solutions and engineering company in Israel.