AI in the enterprise: 8 myths, debunked

Enough with romantic robots: Let's dispel eight common misconceptions about applying AI in the business world now

Misconceptions arise about any emerging technology, but they seem particularly pronounced when it comes to artificial intelligence. Perhaps that’s because the very scope of AI’s potential impacts has accrued a certain mythical status of its own.

“AI is commonly misunderstood because there’s a vast universe for us to explore, and exploring the unknown can be confusing and intimidating,” says Bill Brock, VP of engineering at Very.

This becomes a particular problem for IT leaders trying to size up the actual applications of AI in their organizations.

“While AI in the enterprise is becoming more common, there are still a fair amount of misconceptions about its use cases and how it can improve or update systems of the past,” Brock says. “While we can romanticize the notion about robots becoming our colleagues, it’s necessary to understand how these different kinds of technologies can help enhance our systems and create a more efficient environment.”

Indeed, “romanticizing technology” is the stuff of pie-in-the-sky sales pitches, not the bottom-line results that strategic CIOs are achieving with AI.

And achieving, they are: A new Harvard Business Review Analytic Services report, “An Executive’s Guide to Real-World AI,” details how tech executives are already logging AI wins at companies including Adobe, 7-Eleven, Bayer Crop Science, Caesars Entertainment, Capital One, Discover, Equifax, and Raytheon. (Download the full report.)

Moreover, romanticizing reality often produces the kinds of myths and misconceptions that stand in the way of actionable goals. So we asked Brock and other experts to identify the common myths about AI in the enterprise today to help IT leaders and other business people separate fact from fiction.

[ How does RPA fit in with AI and ML? Read also: How to explain Robotic Process Automation (RPA) in plain English. ]

8 AI Myths

1. "AI and machine learning are the same thing."

They’re not, and understanding the difference between the two is crucial for reasons ranging from avoiding snake-oil solutions to setting up your AI initiatives for tangible success. Machine learning is better thought of as a specific sub-discipline of AI.

“In many conversations, I find that little distinction is drawn between these terms,” says Michael McCourt, research scientist at SigOpt. “That can be problematic. If someone in power at a company believes that ‘build me a classification model’ is equivalent to ‘use our data to solidify our decision-making process,’ then the important step of appropriately interpreting the structure and implications of the model as it will be applied is lost. Failing to recognize this myth will lead companies to underinvest in the AI team and perhaps not sufficiently involve people with stronger business context in the development and interpretation of these models, which can doom an AI team to failure.”
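
To make McCourt’s distinction concrete, here’s a minimal sketch, assuming scikit-learn (the cost figures and function names are our own illustration, not his): the trained classifier is the machine learning artifact, while the decision layer around it is where the broader business context has to live.

```python
# A minimal sketch, assuming scikit-learn; the cost figures and names are
# our own illustration, not McCourt's. "Build me a classification model"
# produces only the first half of this file.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)  # the ML sub-discipline

def decide(features, cost_fp=1.0, cost_fn=5.0):
    # The decision-making process: interpret the model with business
    # context the classifier knows nothing about, here asymmetric error
    # costs that set the approval threshold.
    p = model.predict_proba([features])[0, 1]
    threshold = cost_fp / (cost_fp + cost_fn)  # classic cost-based cutoff
    return "flag for review" if p >= threshold else "approve"

print(decide(X[0]))
```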

[ Get a crash course: AI vs. machine learning: What’s the difference? ]

2. "AI and automation are the same thing."

AI and machine learning aren’t the only two terms that get confused. AI and automation also tend to get mixed up, because they too have a relationship with one another – an important one.

“As people become more familiar with AI, they learn that artificial intelligence is a machine capable of thinking – or at least making clever decisions based upon a series of pre-defined models and algorithms – and automation is simply completing a task without human interference,” Brock says. “Automation does not necessarily imply AI, but some of AI’s most impactful use cases enhance automation in a dramatic way.”
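
A toy contrast makes this concrete. In the hypothetical ticket-routing sketch below (assuming scikit-learn; the scenario and training examples are invented), both functions automate the same task, but only the second involves anything learned:

```python
# A toy contrast, assuming scikit-learn; the ticket-routing scenario and
# training examples are our own illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def route_rule_based(ticket: str) -> str:
    # Plain automation: a fixed, hand-written rule. No learning anywhere.
    return "billing" if "invoice" in ticket.lower() else "general"

# AI-enhanced automation: the same task, but the rule is learned from data.
tickets = ["invoice is wrong", "password reset", "charged twice", "app crashes"]
labels = ["billing", "general", "billing", "general"]
model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(tickets, labels)

def route_ml(ticket: str) -> str:
    return model.predict([ticket])[0]
```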

3. "More training data leads to better AI outcomes."

It’s an increasingly common (and increasingly problematic) misunderstanding that the only real prerequisite to AI success is lots of data.

“It’s not the quantity of training data that matters, it’s the quality,” says Rick McFarland, chief data officer at LexisNexis Legal & Professional. “Large volumes of poorly or inconsistently labeled training data don’t bring you closer to an accurate outcome. They can actually trick modelers by creating ‘precise’ results, since the formula for variance is inversely dependent on sample size. In a nutshell, you get precisely inaccurate results.”
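
McFarland’s variance point is easy to demonstrate in a few lines of simulation (the numbers below are our own toy illustration, not his): the large, mislabeled sample yields a tight confidence interval around the wrong answer, while the small, clean sample brackets the truth.

```python
# A toy simulation of "precisely inaccurate"; all numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.30  # the quantity we want to estimate

# 100,000 examples, but 20% of the labels are flipped: systematic bias.
big_noisy = rng.random(100_000) < true_rate
flipped = rng.random(100_000) < 0.20
big_noisy = np.where(flipped, ~big_noisy, big_noisy)

# 1,000 carefully labeled examples.
small_clean = rng.random(1_000) < true_rate

for name, sample in [("big, noisy", big_noisy), ("small, clean", small_clean)]:
    est = sample.mean()
    # The standard error shrinks as 1/sqrt(n): that is the "precision".
    half_width = 1.96 * sample.std() / np.sqrt(len(sample))
    print(f"{name}: {est:.3f} +/- {half_width:.3f}")

# Typical output: the noisy estimate lands near 0.38 with a tiny interval
# (precisely inaccurate); the clean one brackets the true 0.30.
```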

We’ll go out on a modest limb here and predict that one of the most common lessons learned from early AI failures will be: We just threw a lot of data at it and assumed it would work. In the early phases, bigger is not necessarily better.

“This cannot be stressed enough: Quality data is integral to an effective algorithm,” says Brock from Very. “Often people mistake the capabilities of AI and how it needs to be set up for success. Bad data produces bad results, no matter what problem you’re looking to solve.”

Brock adds that there are jobs on AI and machine learning teams now that are focused almost entirely on curating and cleaning data. Even if you’re not at that point yet, always prioritize quality over quantity.

“The best practices today focus on creating better training datasets using structured methods and tests for biases,” McFarland says. “The result is that modelers can actually use smaller datasets, derived at lower cost.”
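
What a structured test for training-data quality might look like is sketched below, assuming pandas; the specific checks and toy rows are our illustration, not a method McFarland describes.

```python
# A hedged sketch of pre-training data audits; checks and rows are invented.
import pandas as pd

df = pd.DataFrame({
    "text":  ["invoice wrong", "invoice wrong", "password reset", "charged twice"],
    "label": ["billing",       "general",       "general",        "billing"],
})

# Test 1: class balance. A heavily skewed label distribution is a red flag.
print(df["label"].value_counts(normalize=True))

# Test 2: label consistency. Identical inputs carrying conflicting labels
# point to noisy annotation ("invoice wrong" appears under two labels).
conflicts = df.groupby("text")["label"].nunique()
print(conflicts[conflicts > 1])
```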

[ Read the related article by Eric Brown: Getting started with AI in 2019: Prioritize data management. ]

4. "AI will deliver value from the moment it is deployed."

That’s not to say “more data” is intrinsically a bad thing; in fact, it becomes increasingly necessary over time. But time is the key word: You’ll need it to bring quantity and quality into sync. In general, no one should expect instant ROI on their AI initiatives, yet that’s sometimes how the technology is portrayed: Just turn it on and watch the magic happen.

“AI and ML engines need to be trained, and they require vast amounts of data to learn. Some of the data can be seeded,” says Javed Sikander, CTO, NetEnrich. “However, the bulk of the data comes from the domain where the system is deployed, and where the AI/ML systems focus their learning efforts. So it is not reasonable to expect AI/ML systems to produce recommendations and insights from day one. Processes need to be put in place, and resources need to be allocated in various environments, for this learning to happen gradually. Only then does the magic happen.”
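
A rough sketch of that gradual learning loop, assuming scikit-learn’s incremental-learning API (the seeding, batch sizes, and “day” framing are our own illustration):

```python
# A rough sketch of seeding a model, then learning from domain data as it
# arrives; batch sizes and the "day" framing are our own illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=5_000, random_state=0)
seed_X, seed_y = X[:200], y[:200]  # the small seeded dataset
model = SGDClassifier(random_state=0).partial_fit(seed_X, seed_y, classes=[0, 1])

# After deployment, domain data arrives in batches and the model keeps
# learning; useful recommendations emerge gradually, not on day one.
for day, start in enumerate(range(200, 5_000, 1_200), start=1):
    batch_X, batch_y = X[start:start + 1_200], y[start:start + 1_200]
    print(f"day {day}: accuracy on new data = {model.score(batch_X, batch_y):.2f}")
    model.partial_fit(batch_X, batch_y)
```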

5. "AI and machine learning are basically just software development."

Diego Oppenheimer, CEO of Algorithmia, sees organizations approaching AI and ML in the same manner as they do any other software development.

“It’s a myth that AI/ML development is just software development,” Oppenheimer says. “In truth, the majority of ML projects fail, and a big reason is that ML workloads behave very differently than traditional software and require a different set of tools, infrastructure and processes to deploy and manage at scale.”

Oppenheimer points to issues like:

  • Heterogeneity: There’s a large, growing menu of languages and frameworks to navigate. “Data science is all about choice, and it’s going to get bigger before it gets smaller,” Oppenheimer says.
  • Composability: AI and ML commonly involve simultaneous pipelines of multiple components, each potentially built by a different team and in a different language. Oppenheimer gives the example of a system that requires one model to select target images, another to extract text from those images, a third to do sentiment analysis on that extracted text, and a fourth to recommend an action based on that sentiment (a sketch follows this list). While traditional application development might be headed in this direction with things like microservices, it’s still relatively monolithic compared to what’s required of AI and ML, according to Oppenheimer. This will require an adjustment for some teams.
  • Development process: “In traditional software development, the output is code that executes in a controlled environment,” Oppenheimer says. “In machine learning, the output is an evolving ecosystem – inferences made by the interaction of your code with live data. That requires a very different, more iterative cycle.”
  • Hardware / infrastructure: “[It’s] still evolving: CPUs, TPUs, GPUs, edge compute, and any number [of new choices] to come – each with different strengths and challenges.”
  • Performance metrics: “ML-based performance metrics are multidimensional and very context-sensitive,” Oppenheimer notes. That means there’s no standard set of metrics that work for everyone or even many. “A retail fraud detection model might be good enough at 75 percent accuracy if it errs on the side of false positives, as long as it returns results fast enough to not impact the checkout process,” he says. “A fraud detection model used by forensic accountants might trade performance for greater accuracy.”
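
Oppenheimer’s composability example is easier to see as code. The sketch below is schematic; every function is a stub of our own invention, standing in for a separately built, deployed, and versioned model.

```python
# A schematic of the four-model pipeline; every function here is a stub
# of our own invention, standing in for an independently owned model.
def select_target_images(images):
    # Model 1: filter to the images worth processing.
    return [img for img in images if img.get("relevant")]

def extract_text(image):
    # Model 2: OCR; here just a stub reading a prepared caption.
    return image["caption"]

def sentiment(text):
    # Model 3: sentiment analysis, crudely faked with a keyword check.
    return -1.0 if "bad" in text else 1.0

def recommend_action(score):
    # Model 4: map sentiment to a business action.
    return "escalate" if score < 0 else "archive"

def run_pipeline(images):
    # Four independently built models chained together; drift or failure
    # in any one stage degrades the whole system.
    return [recommend_action(sentiment(extract_text(img)))
            for img in select_target_images(images)]

print(run_pipeline([{"relevant": True, "caption": "bad checkout flow"}]))
```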
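
His metrics point can be sketched the same way: one hypothetical fraud model serves both of his users simply by moving the decision threshold (the scores, thresholds, and rates below are invented for illustration).

```python
# A toy illustration of context-sensitive metrics: one model, two users,
# two operating points. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
is_fraud = rng.random(10_000) < 0.02
# Imaginary model scores: fraud tends to score higher, with real overlap.
score = np.clip(np.where(is_fraud,
                         rng.normal(0.70, 0.15, 10_000),
                         rng.normal(0.40, 0.15, 10_000)), 0, 1)

for user, threshold in [("retail checkout (errs toward false positives)", 0.45),
                        ("forensic audit (trades recall for precision)", 0.70)]:
    flagged = score >= threshold
    recall = (flagged & is_fraud).sum() / is_fraud.sum()
    fpr = (flagged & ~is_fraud).sum() / (~is_fraud).sum()
    print(f"{user}: recall={recall:.2f}, false-positive rate={fpr:.2f}")
```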
