AI vs. automation: 6 ways to spot fake AI
Is that really artificial intelligence - or just automation being described as AI? Let's explore the difference - and six possible signs of AI washing.
Seems like everybody’s an artificial intelligence (AI) provider these days. The global enterprise AI market, which garnered $4.68 billion in revenues in 2018, is expected to generate $53.06 billion by 2026, according to a report by Allied Market Research. So it’s little wonder that seemingly every enterprise technology vendor wants to grab a piece of the AI pie.
But not all AI solutions are what they seem. “AI washing” – the practice of touting a technology solution as AI when it may be no more than simple automation or a new marketing spin for an existing application – is a real phenomenon, industry analysts say. “Very few, in my opinion, are using strong AI,” says Wayne Butterfield, director of cognitive automation and innovation at ISG. “What we need to be mindful of is that AI covers over 200 different disciplines, so it’s not uncommon to be using a branch of AI in a tool. Some advanced analytics may now be classed as AI, even a small amount of machine learning. This means that you can be creative as a vendor.”
The explosive growth in the velocity, volume, and variety of data being produced by today’s enterprises, along with increasing computing power and the accessibility of new tools, means more IT organizations are considering or deploying AI solutions to solve business problems. However, as always, it’s important to push back on hype and dig into what a new technology actually has to offer before signing on the dotted line.
[ Get our quick-scan primer on 10 key artificial intelligence terms for IT and business leaders: Cheat sheet: AI glossary. ]
What’s the difference between AI and automation?
There is a clear difference between AI (in its various forms, like machine learning, deep learning, or natural language processing) and non-AI automation - both in how they work and what types of outcomes they can produce.
“In general, AI relies on models and algorithms to autonomously find patterns in the data (‘inputs’) to provide insights, predictions, and prescriptions (‘outputs’) that could have significant business impact,” says JP Baritugo, director at business transformation and outsourcing consultancy Pace Harmon.
“In contrast, automation is traditionally used for processes where the input data is structured, the rules are defined with manageable exceptions, and interactions with multiple systems are required. Automation use cases are predominantly task-oriented versus a true end-to-end process view.”
An automation solution, for example, might be used to transfer data from emails into an ERP system, execute bulk data uploads and changes, or provision IT assets and resources for a new employee.
AI, by contrast, has broader applications in the business. It can cluster like data to drive business insights, classify data to determine if a customer is a churn risk, or provide predictions or recommendations such as the “next best action” based on a customer’s profile or behavior.
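The contrast can be sketched in a few lines of code. Below is a minimal, purely illustrative example - all field names and data are hypothetical, not drawn from any real product: the automation path applies a fixed rule a human wrote, while the "AI" path (here, a toy nearest-centroid classifier) derives its decision boundary from labeled examples.

```python
# Hypothetical contrast: rules-based automation vs. a model trained on data.
# All data, field names, and thresholds here are illustrative.

def automation_churn_flag(customer):
    """Automation: a fixed 'if this, then that' rule, written by a human."""
    return customer["months_inactive"] > 3 and customer["support_tickets"] > 5

def train_churn_model(examples):
    """'AI' in miniature: learn per-class feature averages (nearest centroid)
    from labeled examples instead of hand-coding the rule."""
    centroids = {}
    for features, label in examples:
        bucket = centroids.setdefault(label, [[0.0] * len(features), 0])
        bucket[0] = [s + f for s, f in zip(bucket[0], features)]
        bucket[1] += 1
    return {label: [s / n for s in total] for label, (total, n) in centroids.items()}

def predict_churn(model, features):
    """Classify by whichever learned class centroid is closest."""
    def dist(label):
        return sum((f - m) ** 2 for f, m in zip(features, model[label]))
    return min(model, key=dist)

# Train on labeled (months_inactive, support_tickets) -> outcome examples.
training = [((0, 1), "stays"), ((1, 0), "stays"), ((6, 7), "churns"), ((8, 9), "churns")]
model = train_churn_model(training)
print(predict_churn(model, (7, 8)))  # near the learned "churns" centroid
print(predict_churn(model, (1, 1)))  # near the learned "stays" centroid
```

The key difference: change the training data and the model's behavior changes; the automation rule stays exactly as written until a human edits it.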
[ How does this work - and what's next? Read also: How big data and AI work together. ]
“Generally speaking, automation is applied to rote processes, and AI is ‘smart’ because the applications are trained to improve,” Baritugo says.
Telling the difference can still be hard. “The reality is that it is somewhat difficult to identify even for the relatively well-trained eye, since many of the systems present as black boxes that take in a certain input and present an output, regardless of whether AI is involved or not,” says Anil Vijayan, vice president at Everest Group. Making the environment even more confusing, some of today’s robotic process automation (RPA) providers now integrate cognitive capabilities into their offerings.
[ What’s next in AI? Read 10 AI trends to watch in 2020. ]
6 signs something might not really be AI:
“A good rule of thumb to pressure-test if a solution is truly using AI is to ask how it’s typically deployed and used,” advises Baritugo. Here are some red flags that may indicate you’re dealing with an AI wanna-be:
1. The product requires minimal data for training
If you’re told that you don’t need much normalized data for model training or the data requirement is downplayed, take note. “AI-based solutions generally require a fair bit of data to perform at a desired level of accuracy,” says Vijayan. “It would be useful to consider the sources of data available for the system to learn as well.”
Basic machine learning models require thousands of examples to train on; deep learning demands hundreds of thousands or millions. So a vendor that claims to need little or no training data is waving a red flag.
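One way to see why data volume matters is a quick learning-curve experiment. The sketch below is synthetic and illustrative (a toy one-dimensional classifier on generated data, not any vendor's system): a genuinely learned model's accuracy typically climbs and then stabilizes as training data grows, which is exactly what a "no data needed" claim would contradict.

```python
import random

# Illustrative learning curve on synthetic data. A model that truly learns
# from data usually needs a fair number of examples before accuracy settles.
random.seed(0)

def make_example():
    """1-D synthetic data: class 0 clustered near 0.0, class 1 near 1.0."""
    label = random.randint(0, 1)
    return label + random.gauss(0, 0.6), label

def train_threshold(examples):
    """Learn a decision threshold halfway between the two class means."""
    by_class = {0: [], 1: []}
    for value, label in examples:
        by_class[label].append(value)
    means = {k: sum(v) / len(v) for k, v in by_class.items() if v}
    if len(means) < 2:  # too little data to even observe both classes
        return 0.5
    return (means[0] + means[1]) / 2

def accuracy(threshold, examples):
    return sum((v > threshold) == bool(l) for v, l in examples) / len(examples)

test_set = [make_example() for _ in range(1000)]
for n in (2, 20, 2000):
    model = train_threshold([make_example() for _ in range(n)])
    print(n, "training examples -> accuracy", round(accuracy(model, test_set), 2))
```

With only a couple of examples the learned threshold is noisy; with thousands it converges near the ideal boundary. A product that claims equal performance at both ends of that curve deserves a hard follow-up question.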
2. The "AI" needs business rules in order to work
“If this, then that” is a telltale sign of a non-AI automation solution: “This is the opposite of how AI should work,” Baritugo says.
While AI might be used for automation, rules-based tasks are usually executed by “dumber” systems like RPA, says Vijayan.
3. There is a notable lack of case studies
Many companies are touting their AI chops but may be early on in integrating cognitive capabilities into their tools. Use cases are one of the best ways to find out if a product actually leverages AI today. Look for details about data, training models, and outcomes that go beyond increased efficiency or cost savings.
4. The solution does not get better over time
“When vetting a solution for AI capabilities, it would be useful to ask how the system learns,” Vijayan says. The more data an AI model is exposed to, the better it should perform. If the provider can’t convincingly show how the solution has been deployed for other customers and how it has learned and improved over time, Baritugo explains, it’s probably not an AI solution.
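"Learning over time" can be made concrete with a tiny online-learning sketch. This is a deliberately simplified, hypothetical example - a running-mean estimator, not any vendor's algorithm: each new observation nudges the model's estimate, whereas a static rules-based script produces the same output forever until someone rewrites it.

```python
# Hypothetical sketch of a system that improves with data: an online
# estimator that refines its prediction as each observation arrives.

class OnlineMean:
    """Incrementally learned estimate; gets better as it sees more data."""
    def __init__(self):
        self.n = 0
        self.value = 0.0

    def update(self, observation):
        self.n += 1
        # Standard running-mean update: shift the estimate toward the
        # new observation, weighted by how much data we've already seen.
        self.value += (observation - self.value) / self.n

model = OnlineMean()
for obs in [10, 12, 11, 13, 11, 12]:
    model.update(obs)
print(round(model.value, 2))  # estimate has converged toward ~11.5
```

When vetting a vendor, asking "what is the equivalent of this update step in your product, and what data drives it?" is a fair proxy for the question Vijayan suggests.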
5. The vendor's staff is light on AI talent
“If a provider or vendor claims to be delivering AI-based solutions without relying on third-party vendors, it would be instructive to look at their talent model,” says Vijayan. “Developing complex AI-based solutions [requires] ML engineers, data scientists with specialized ML skill sets, and so on.”
6. There's no clarity about how the AI works
The provider should be able to articulate, at least at a high level, the models or algorithms used to power their solution, says Baritugo. “‘What form of AI are you using?’ and ‘What is it doing?’ are the questions I usually start with,” says Butterfield. “When the answer isn’t clear, or the usage isn’t actually useful, it’s often a sign of AI washing.”
[ How can automation free up more staff time for innovation? Get the free eBook: Managing IT with Automation. ]