AI chatbots for IT support: A CIO's lessons learned
Want to say goodbye to tickets? Not quite yet. Here's what one CIO learned while applying AI chatbots to IT help desk tasks
The days of submitting a ticket or picking up the phone and calling the IT service desk are on their way out. End users want quick, frictionless interactions that promptly resolve their problems. Enter AI chatbots, which are being billed as the future of IT support.
While virtual assistants offer a lot of promise, don't expect them to be a panacea for your service issues. At least not yet.
Here's what we've learned from our experience pursuing an AI-based chatbot for our IT help desk.
Exploring our machine learning options
Over the past year, our goal was to improve our IT help desk capabilities by introducing an AI chatbot that could resolve initial requests and improve speed and end user experience. We explored a variety of product options, including ones that already had machine learning and artificial intelligence capabilities built in.
Machine learning breaks down into three broad categories. First is supervised learning, in which you teach an algorithm a variety of possible scenarios and their outcomes. Second is experience-based learning, in which you pass a lot of data to the algorithm and teach it to differentiate various patterns in that data; as it recognizes patterns, you teach it to assign outcomes to the patterns it has defined. The third category is unsupervised learning, in which you throw lots of data at the algorithm and it figures out the patterns and outcomes on its own.
While that third category of machine learning is the most talked about today, we discovered through researching various products that the first category — supervised learning — worked best for us.
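To make the supervised approach concrete, here is a minimal sketch of what "teaching scenarios and outcomes" looks like in code: humans label a handful of example tickets up front, and a toy bag-of-words classifier assigns new tickets to the closest category. The ticket texts, categories, and scoring method are all hypothetical illustrations, not the vendor product described in this article.

```python
from collections import Counter, defaultdict

# Tiny hand-labeled training set: (ticket text, category).
# In supervised learning, humans supply these labels up front.
TRAINING = [
    ("I forgot my password", "password"),
    ("need to reset my password", "password"),
    ("account locked after bad password", "password"),
    ("my laptop will not boot", "hardware"),
    ("screen is flickering on my monitor", "hardware"),
    ("cannot connect to the office vpn", "network"),
    ("wifi keeps dropping", "network"),
]

def tokenize(text):
    return text.lower().split()

def train(examples):
    """Build one bag-of-words profile (word counts) per labeled category."""
    profiles = defaultdict(Counter)
    for text, label in examples:
        profiles[label].update(tokenize(text))
    return profiles

def classify(profiles, text):
    """Score each category by word overlap with its profile; return the best."""
    words = tokenize(text)
    return max(profiles, key=lambda label: sum(profiles[label][w] for w in words))

model = train(TRAINING)
print(classify(model, "please reset my password"))  # -> password
print(classify(model, "vpn connection fails"))      # -> network
```

A real product replaces the word-overlap scoring with a proper learning algorithm, but the workflow is the same: the quality of the human-supplied labels determines the quality of the outcomes.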
[ Get a crash course: AI vs. machine learning: What’s the difference? ]
The product we chose focused solely on IT service management, which enabled us to learn from its customers' experiences. We also had the ability to influence the product's road map and priorities from an engineering perspective, which was attractive to us.
Once we chose the product, it was clear we had to make some decisions around scope. Typically, most algorithms need a lot of data to make good decisions or to learn the right outcomes. In IT service management, that data encompasses all of the incidents that have been reported during a particular time period. For some companies, that might be 10,000 or 20,000 incidents a year, which is just not enough data for a bot to recognize patterns and make the right decisions on outcomes.
We took 18 months of tickets and fed them into the product, only to discover that the actual problem or root cause often differed from the initial issue raised by the end user when the ticket was created. This meant that we needed to curate all the data and categorize it in a way that made it easier for the chatbot to ingest. This process would have taken us months, and in the end, the data wouldn't have been as clean as the algorithm needed it to be to determine the correct outcomes.
Our best path forward from a self-learning perspective was to focus on password-related issues, which made up the largest share of tickets we received — about 20 percent of our total. From there, we performed a deep dive on teaching the chatbot the different questions that end users might ask along with the related outcomes: everything from password usage and forgotten passwords to password resets.
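The "questions and related outcomes" mapping described above can be sketched as a small intent table: each intent pairs trigger keywords (the kind support staff would teach the bot) with a scripted outcome. The intent names, keywords, and outcome text here are hypothetical examples, not our actual configuration.

```python
# Hypothetical intent table for password-related requests.
INTENTS = {
    "forgot_password": {
        "keywords": {"forgot", "forgotten", "remember"},
        "outcome": "Send a self-service password reset link.",
    },
    "reset_password": {
        "keywords": {"reset", "change", "expired"},
        "outcome": "Walk the user through the reset portal.",
    },
    "locked_account": {
        "keywords": {"locked", "lockout", "disabled"},
        "outcome": "Unlock the account after identity verification.",
    },
}

def match_intent(question):
    """Return (intent, outcome) for the intent with the most keyword hits,
    or (None, escalation message) when nothing matches."""
    words = set(question.lower().replace("?", "").split())
    best, hits = None, 0
    for name, intent in INTENTS.items():
        overlap = len(words & intent["keywords"])
        if overlap > hits:
            best, hits = name, overlap
    if best is None:
        return None, "Escalate to a service desk technician."
    return best, INTENTS[best]["outcome"]
```

Note the fallback: anything the bot cannot match goes to a human technician rather than a guessed answer, which is the safer default while the model is still being taught.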
Launch and lessons learned
The service desk team spent two to three months testing the bot before launching a three-month pilot with IT and finance. During the pilot, we discovered that the natural language processing engine from the product was a little weak, so we made further modifications to the chatbot’s learning and outcomes.
We also summarized all of our knowledge-based articles that service desk technicians use and ingested them into the chatbot. In addition, we reviewed all the interactions that people had during the pilot to ensure that the bot could learn and answer questions correctly.
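Ingesting summarized knowledge-base articles amounts to building a small retrieval index the bot can search when a question comes in. The sketch below uses simple word-overlap scoring over hypothetical article summaries; a real product would use its own NLP engine, but the ingest-then-retrieve shape is the same.

```python
# Hypothetical knowledge-base index: article title -> short summary,
# as summarized by the service desk team before ingestion.
KB_ARTICLES = {
    "Resetting your password": "Use the self-service portal to reset an expired or forgotten password.",
    "Connecting to VPN": "Install the VPN client and sign in with your corporate credentials.",
    "Requesting new hardware": "Submit a hardware request form for laptops, monitors, and peripherals.",
}

def best_article(question):
    """Return the article whose title and summary share the most words
    with the user's question."""
    q = set(question.lower().split())
    def overlap(title):
        article_words = set((title + " " + KB_ARTICLES[title]).lower().split())
        return len(q & article_words)
    return max(KB_ARTICLES, key=overlap)
```

Reviewing pilot interactions, as described above, is essentially checking that this kind of lookup keeps returning the right article and correcting it when it doesn't.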
In early March, we pushed the chatbot to everyone in the company, and we’ve seen some really good initial results. As with any new product launch, adoption is key. We’ve been working hard on communicating and making sure people are aware that this chatbot is now available.
Through this entire process, we’ve learned a few things. First is that the IT service management space for intelligent chatbots is not very mature, so don’t expect robust solutions. In fact, vendors will claim that their product will automate 80 percent of your interactions; take that with not just a grain — but a bucket — of salt. We still have several years before products in this space mature.
Second, unless you have robust internal engineering and development resources, it’s generally not worthwhile to build your own solution. And finally, with assisted learning, there’s a significant time commitment required of internal subject matter experts within IT. We didn’t realize how much time it would take us every week to perform manual reviews during the initial testing and the pilot to teach the bot new skills.
While we’re happy with the initial feedback we’ve received on our chatbot, there is still a lot of work ahead of us. Our initial goal was to improve IT service desk and help desk capabilities with AI and ML, and we’re well on our way.
[ Want lessons learned from CIOs applying AI? Get the new HBR Analytic Services report, An Executive's Guide to Real-World AI. ]