Artificial Intelligence (AI) strategy: 8 counterintuitive tips

To implement AI successfully in your business processes, rethink traditional IT approaches and some common wisdom. Consider these tips.

Artificial intelligence (AI) has officially entered the enterprise, quickly evolving from a pipe dream to reality. Indeed, the majority of organizations (85 percent) are either adopting or evaluating AI, according to a recent O’Reilly survey, with more than half using AI in production or for analysis.

AI doesn’t fit neatly into the same processes and approaches that IT has used in the past.

These efforts, while rapidly expanding, are still in their early stages. And the growing pains are already becoming evident. “Companies need to do more to put their AI efforts on solid ground,” O’Reilly analysts point out. “Whether it’s controlling for common risk factors – bias in model development, missing or poorly conditioned data, the tendency of models to degrade in production – or instantiating formal processes to promote data governance, adopters will have their work cut out for them as they work to establish reliable AI production lines.”

AI doesn’t fit neatly into the same processes and approaches that the IT organization has used in the past. The best practices and common-sense approaches that apply to evaluating, testing, implementing, and scaling non-learning systems may not always translate. In some cases, they may backfire.

[ Do you understand the main types of AI? Read also: 5 artificial intelligence (AI) types, defined. ]

8 AI strategy tips that buck common wisdom

Here are eight counterintuitive tips that will help your AI efforts going forward.

1. Slow down

In some organizations, there is a breathless rush toward the AI-ification of the enterprise. That can be dangerous if unchecked. Modern AI has high IQ but low EQ, says Dr. Jerry A. Smith, vice president of data sciences, machine learning, and AI at Cognizant. True intelligence requires both. “If you get data and use AI to analyze it and learn from it without emotion and do it at scale, you’re basically turning a psychopath loose in [the] system.”

IT leaders should take the time to have human discussions very early on about what they’re trying to achieve with AI, Smith says.

Execs often want AI to save them, he adds. “But in the end, if they don’t set up the right framework and strategy, it actually harms them.”

[ Get our quick-scan primer on 10 key artificial intelligence terms for IT and business leaders: Cheat sheet: AI glossary. ]

2. Focus on skills and culture before tools

“Technology is often where companies start if they are trying to innovate, and that’s not surprising,” says Shawn Rogers, vice president of analytic strategy at TIBCO. “But turning your back on the people and culture aspect will certainly doom you to failure.”

Rogers adds, “New skills are required to drive success in AI, along with a culture that fosters the adoption and action to get to the value of AI and machine learning (ML) technology. A balanced strategy is required for success.”

3. Plan for iteration

Enterprises looking to get started with AI need to begin with a use case in mind. However, most AI and ML use cases evolve over time in a very iterative fashion, notes Ashish Thusoo, CEO and co-founder of the open data lake company Qubole. “It is critical that organizations invest in the capability to perform continuous data engineering and provide both SQL and programmatic access to train and deploy models,” says Thusoo, who previously co-founded Apache Hive and built the Facebook data platform.
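
To make this concrete, here is a minimal sketch of the kind of iterative loop Thusoo describes, assuming a warehouse reachable over SQL and a model trained programmatically with scikit-learn. The connection string, table, and column names are hypothetical, and the model choice is illustrative only; each iteration of the use case can change the query and rerun the same pipeline.

```python
# Minimal sketch: pull features via SQL, retrain programmatically, report a metric.
# The connection string, table, and columns below are hypothetical placeholders.
import pandas as pd
from sqlalchemy import create_engine
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

engine = create_engine("postgresql://analytics:secret@warehouse/prod")  # hypothetical warehouse

def train_iteration(feature_query):
    """One iteration: fresh features in, retrained model and holdout metric out."""
    df = pd.read_sql(feature_query, engine)
    X, y = df.drop(columns=["churned"]), df["churned"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    return model, auc

# As the use case evolves, only the SQL needs to change; the pipeline stays the same.
model, auc = train_iteration("SELECT * FROM customer_features WHERE snapshot_date = CURRENT_DATE")
print(f"holdout AUC: {auc:.3f}")
```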

4. DevOps is not enough

Most innovative IT shops are already on the DevOps train. That’s mandatory for AI adoption, but not sufficient. Organizations need to add MLOps, says George Mathew, client partner for technology services at Fractal Analytics. “This integration needs to be planned early on in the application lifecycle,” he says, “and followed throughout the subsequent phases.”

For example, organizations need to consider the retraining of AI models that will happen in production. “This means that additional pipelines have to be built to compare the insights (such as forecasts) generated from the AI models against the actual numbers received from the field a few weeks or months later,” Mathew explains.

"Additional pipelines have to be built to compare the insights generated from the AI models against the actual numbers received from the field weeks or months later."

5. Brace for scale

Early forays into AI tend to leverage a few models using a defined group of data. However, those efforts can quickly expand into something far less manageable. “As success gains velocity, managing hundreds of models in production and multiple authoring environments for growing data science teams creates new challenges to keep up with the growing demands,” says TIBCO’s Rogers.

[ What’s next? Read also: 10 AI trends to watch in 2020 and How big data and AI work together. ]

6. Look for bias, but not where you think

“Understanding the relationship between inputs and outputs is the easy part of AI,” says Smith of Cognizant. “A lot of people put pressure on AI for biases. They want to make sure it’s not biased.” For example, you don’t want biased AI algorithms making loan decisions.

Meanwhile, however, human underwriters may themselves be biased. “You need to get your hands around the human intelligence component of this and make sure the person building AI systems isn’t biased in the modeling,” Smith says. Mitigating bias can’t start or stop at the data level.
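
One way to act on this is to run the same fairness check on both the model’s decisions and the human ones. The sketch below computes a simple approval-rate parity ratio for each; the file and column names are hypothetical, and a single ratio is a starting signal, not a full bias audit.

```python
# Minimal sketch: apply the same disparate-impact check to model and human decisions.
# The CSV file and its columns (group, model_approved, human_approved) are hypothetical.
import pandas as pd

def parity_ratio(df, decision_col, group_col="group"):
    """Ratio of the lowest group approval rate to the highest; 1.0 means parity."""
    rates = df.groupby(group_col)[decision_col].mean()
    return rates.min() / rates.max()

decisions = pd.read_csv("loan_decisions.csv")
print("model parity ratio:", round(parity_ratio(decisions, "model_approved"), 2))
print("human underwriter parity ratio:", round(parity_ratio(decisions, "human_approved"), 2))
# A ratio well below 0.8 for either column is a cue to investigate further, not a verdict.
```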

[ How can you spot AI bias? Read also: AI bias: 9 questions leaders should ask. ]

7. Don't put the data scientists in charge

The most effective AI systems are built to augment humans. “If you want a system to support human beings, it has to be human-centric,” Smith says. “You need psychologists (people who understand the behaviors of customers) and sociologists (people who know how your customers interact in society). AI is too important to be left to data scientists.”

"If you want a system to support human beings, it has to be human-centric."

8. Prepare to explain yourself

Explainable AI (XAI) – techniques that enable humans to understand, trust, and manage AI – is becoming more mainstream. As a result, says Mathew of Fractal Analytics, some IT organizations will be on the receiving end of regulatory audits asking for details of AI-model training runs, such as what data sets were used, how the algorithm was evaluated, and what model metrics were generated at each stage.

“These elements need to be collected and stored throughout the time period that the models run in production – and beyond, for some use cases,” Mathew says. “The solution architect needs to prepare the architecture to address these requirements, the project leader has to include these steps and deliverables in the project plan, and the data scientists and engineers building the application have to work with this framework.”

Planning for that work at the start of the project, and providing the necessary support throughout the system development lifecycle, has become a critical success factor for AI applications.
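
As a sketch of the kind of audit record Mathew describes, the snippet below captures the data set used, the algorithm, and the evaluation metrics for each training run in an append-only log. In practice an experiment-tracking tool would usually hold this; the file names and metric values here are hypothetical.

```python
# Minimal sketch: persist per-run audit metadata (data set, algorithm, metrics).
# File names and metric values are hypothetical; a tracking tool would normally store this.
import hashlib
import json
from datetime import datetime, timezone

def record_training_run(dataset_path, algorithm, metrics, log_path="training_audit_log.jsonl"):
    """Append one auditable record: which data, which algorithm, which evaluation results."""
    with open(dataset_path, "rb") as f:
        dataset_sha256 = hashlib.sha256(f.read()).hexdigest()
    run = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset": {"path": dataset_path, "sha256": dataset_sha256},
        "algorithm": algorithm,
        "metrics": metrics,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(run) + "\n")  # append-only, kept for the life of the model and beyond
    return run

record_training_run("training_data_v12.csv", "GradientBoostingClassifier",
                    {"auc": 0.87, "precision": 0.81})
```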

[ How can automation free up more staff time for innovation? Get the free eBook: Managing IT with Automation. ] 

Stephanie Overby is an award-winning reporter and editor with more than twenty years of professional journalism experience. For the last decade, her work has focused on the intersection of business and technology. She lives in Boston, Mass.