5 AI adoption mistakes to avoid

Artificial intelligence (AI) and related technologies are becoming increasingly mainstream business tools. If you're planning an AI implementation, beware of these potential pitfalls

Artificial intelligence (AI) and machine learning can be invaluable assets to business success. By implementing AI, businesses can automate hours of manual data sifting and enable smarter, faster business decisions. However, automation and AI do not remove the need for human responsibility.

It’s important to follow best practices to ensure AI helps versus hurts your business. Here are five mistakes to avoid in leveraging AI to meet company goals.

1. Not identifying the correct use case

By now, many businesses realize the benefits of AI. In fact, if you don’t have automation somewhere in your company, you’re probably falling behind your competitors. According to a PwC study, 86 percent of respondents expect AI to become a ‘mainstream technology’ at their company in 2021.

Despite the proliferation of AI, arbitrary attempts to implement it in your business are ill-advised. It’s important to apply AI to the right use cases for the best outcomes. Instead of asking ‘Can I apply AI to this situation?’, ask ‘Am I applying the right AI to the right situation?’ Any AI implementation must ultimately be worth the company time and resources it consumes; if an AI initiative runs counter to business goals, both will be wasted.

[ Read also: Artificial intelligence: What is an AI product? ]

2. Not hiring the right talent

The hiring landscape in tech is changing. According to a recent survey by CodinGame, almost 50 percent of tech recruiter respondents say they’re struggling to fill open roles. As hiring in tech becomes more difficult, diligence in the hiring process – especially for AI roles – becomes even more important.

[ Also read: Artificial intelligence (AI): 7 roles to prioritize now. ]

Hiring in AI is like putting together the right football team. Don’t sign all quarterbacks or all linebackers – or, in AI terms, don’t hire only generalist data scientists. Pay attention to each candidate’s specialized skill set and experience, and match these to your business needs. For example, deep expertise in modeling is critical for thorough research and solution development, while data engineering skills are essential to execute the solution.

3. Not providing the proper care for data

Every AI-related business goal begins with data – it is the fuel that enables AI engines to run. One of the biggest mistakes companies make is not taking care of their data, which begins with the misconception that data is solely the IT department’s responsibility. Before data is captured and fed into AI systems, business subject matter experts and data scientists should be looped in, and executives should provide oversight to ensure the right data is being captured and maintained appropriately. Non-IT personnel should realize that they not only benefit from good data through higher-quality AI recommendations, but that their expertise is also a critical input to the AI system. Make sure all teams share responsibility for curating, vetting, and maintaining data.

Data management procedures are also a key component of data care. Processes for data management and governance need to evolve to handle the increased volume, velocity, and variety of data while ensuring compliance with government and corporate regulations. This includes data collection, data storage, and protocols for accountability and regular assessment.
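
To make “regular assessment” concrete, here is a minimal Python sketch of an automated data-quality check, assuming a pandas DataFrame of records with a hypothetical “updated_at” timestamp column; the metrics and staleness threshold are illustrative, not a prescribed standard.

    import pandas as pd

    def assess_data_quality(df: pd.DataFrame, max_staleness_days: int = 30) -> dict:
        """Report simple completeness, duplication, and freshness metrics."""
        report = {
            # Share of missing values per column (completeness).
            "missing_share": df.isna().mean().to_dict(),
            # Fraction of fully duplicated rows.
            "duplicate_share": float(df.duplicated().mean()),
            # Days since the most recent record (freshness).
            "staleness_days": (pd.Timestamp.now() - df["updated_at"].max()).days,
        }
        report["needs_review"] = report["staleness_days"] > max_staleness_days
        return report

A scheduled job could run a check like this against each critical dataset and route failures to the data’s business owner as well as to IT, reinforcing the shared responsibility described above.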

4. Not maintaining AI effectiveness

AI requires intervention to remain an effective solution over time. For example, if AI is malfunctioning or business objectives change, AI processes need to change. Doing nothing, or intervening inadequately, could result in AI recommendations that hinder or even work against business objectives.

Consider AI-based pricing systems, for example. AI effectiveness will degrade if the AI system is not set up to adjust to market changes. In other words, as the source data changes in nature, the AI system must adapt to match the current market.
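
One common way to catch this kind of degradation is to compare the distribution of recent inputs against the data the model was trained on. The sketch below is a simplified illustration using a two-sample Kolmogorov-Smirnov test; the values and threshold are hypothetical and not drawn from any particular pricing product.

    import numpy as np
    from scipy.stats import ks_2samp

    def input_has_drifted(train_values: np.ndarray,
                          recent_values: np.ndarray,
                          p_threshold: float = 0.01) -> bool:
        """Flag drift when the samples are unlikely to share one distribution."""
        return ks_2samp(train_values, recent_values).pvalue < p_threshold

    # Example: competitor prices seen at training time vs. the last few weeks.
    training_prices = np.random.normal(loc=100, scale=5, size=1000)
    recent_prices = np.random.normal(loc=115, scale=8, size=200)
    if input_has_drifted(training_prices, recent_prices):
        print("Market inputs have shifted - review the pricing model.")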

One way to measure AI effectiveness is through sales team performance. Effective sales teams want to follow pricing recommendations that help them achieve their goals, so they should be open to having their performance measured by how well they adopt AI that drives value. Common pricing-related KPIs include profit margin and revenue. Tracking KPIs also helps illuminate which sales teams or team members are actually adopting the AI. If the recommendations are not fueling KPI achievement, it may be time to intervene.
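
As a rough illustration, the sketch below computes adoption and pricing KPIs per sales team from a pandas DataFrame of closed deals; the column names (“team”, “followed_ai_recommendation”, “revenue”, “cost”) are hypothetical.

    import pandas as pd

    def kpis_by_team(deals: pd.DataFrame) -> pd.DataFrame:
        """Adoption rate, revenue, and profit margin per sales team."""
        grouped = deals.groupby("team")
        revenue = grouped["revenue"].sum()
        cost = grouped["cost"].sum()
        return pd.DataFrame({
            # Share of deals where the AI price recommendation was followed.
            "adoption_rate": grouped["followed_ai_recommendation"].mean(),
            "revenue": revenue,
            # Profit margin = (revenue - cost) / revenue.
            "profit_margin": (revenue - cost) / revenue,
        })

Teams with high adoption but falling margins are usually the first place to investigate.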

Intervention should be scalable and repeatable, using highly automated processes to minimize the burden on AI users. It should include two components: reviewing the inputs to the AI system and verifying that its output is as expected. Both practices should happen regularly throughout the year. Don’t wait for AI to malfunction before you intervene – by then your margins may have suffered.
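
For the output side (the drift check above illustrates the input side), a minimal automated review might simply measure how many recommendations fall outside an agreed range and escalate when that share gets too high. The price corridor and escalation threshold below are hypothetical.

    import pandas as pd

    def out_of_bounds_share(prices: pd.Series, floor: float, ceiling: float) -> float:
        """Share of recommended prices outside the agreed corridor."""
        return float((~prices.between(floor, ceiling)).mean())

    # Example: escalate for human review if more than 2% of recommendations
    # fall outside the corridor.
    recommendations = pd.Series([99.0, 101.5, 250.0, 104.2])
    share = out_of_bounds_share(recommendations, floor=90.0, ceiling=120.0)
    if share > 0.02:
        print(f"{share:.1%} of recommendations out of bounds - intervene.")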

5. Not accounting for potential biases in available data

Like humans, AI and its derived outputs can be biased when exposed to a limited or nonrepresentative dataset. (This is true for both AI models and descriptive analytics.) These biases often have little to do with the intentions behind the AI. Therefore, when the consequences of those biases surface, the blame often falls on the gatekeepers of the AI, not the AI itself.

As mentioned above, data and intervention are important components of successful AI use. This is especially true when biases are uncovered in AI. However, preventing a problem is always better than having to fix it. If possible, avoid data that can inadvertently encode bias around race, gender, class, and other sensitive attributes. For example, modeling based directly on consumers’ geography and income may produce biased output.
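
One simple pre-deployment check is to compare a model’s average output across groups; large gaps can signal that a feature such as geography is acting as a proxy for something it shouldn’t. The sketch below uses hypothetical column names, and a real fairness review would go well beyond a single metric.

    import pandas as pd

    def outcome_gap_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
        """Difference between each group's mean outcome and the overall mean."""
        return df.groupby(group_col)[outcome_col].mean() - df[outcome_col].mean()

    # Example: approval rates by region that diverge sharply from the overall
    # rate may indicate proxy bias worth investigating.
    offers = pd.DataFrame({
        "region": ["north", "north", "south", "south", "south"],
        "approved": [1, 1, 0, 0, 1],
    })
    print(outcome_gap_by_group(offers, "region", "approved"))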

To prevent or fix biases, explainable AI can be a good solution. Explainable AI methods identify the key factors driving an AI model’s predictions or recommendations, which makes intervention much easier. Once explainable AI shows how the model is arriving at biased outputs, intervention must be swift, repeatable, and scalable to avoid further negative consequences for your business and your consumers.
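
As one illustration of the idea, the sketch below uses permutation importance (a model-agnostic explainability technique) to surface which inputs drive a model’s predictions most strongly; the data and feature names are synthetic, and a production review would use richer tooling.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))          # pretend features: cost, demand, region code
    y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

    model = RandomForestRegressor(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=5, random_state=0)

    # Features with large importances are the key drivers to scrutinize for bias.
    for name, score in zip(["cost", "demand", "region_code"], result.importances_mean):
        print(f"{name}: importance {score:.3f}")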

Help AI help you

Used correctly, AI can be an indispensable asset for your business. From an increased return on investment to fulfilled business goals to satisfied customers, the effects can be significant. Being intentional about AI use and developing guidelines to avoid common mistakes will allow your AI implementations and your business to grow together.

[ How does AI connect to hybrid cloud strategy? Get the free eBook, Hybrid Cloud Strategy for Dummies ]

Justin Silver, PhD, is a manager of data science and AI strategist at PROS. He specializes in the application of data science to enable pricing and sales excellence. Dr. Silver’s innovative contributions to the PROS solutions suite have helped customers to achieve substantial ROI through a scientific approach to commerce.