3 AI misconceptions IT leaders must dispel
To tap the transformative power of AI, CIOs need to help people understand its potential – and limitations
Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain, says Anthony Scriffignano, Ph.D., senior vice president and chief data scientist at Dun & Bradstreet.
To properly harness the power of AI, he says, we need to let go of the wrong assumptions we’ve made about it. Here are three he believes are the biggest.
1. AI is comparable to human intelligence
You could say that humans and AI don’t think in the same way, but that would be misleading because AI doesn’t think in our sense of the word at all, Scriffignano explains.
“In many ways, it’s not really intelligence. It’s regressive.” In other words, he says, “It looks at what has happened and tries to use that information to accomplish some sort of goal using attributes that hopefully are stable and true.” This is why AI runs into trouble when it encounters something completely new, he explains.
To illustrate this point, he often asks audience members to name five things that are red. They might call out a cherry, a tomato, a fire engine, lipstick, and a clown’s nose. “I’ll say, ‘You never put those items together in the same thought before. So you’ve just written a program and tested it.’” This is a good example of a regressive task.
But what if he asked for five examples of, say, alien baby names? A human would know how to respond appropriately to that question, but AI would not. “There are different ways to interpret that question,” he says. “We apply content, context, and experience, and then there are soft things like intuition and skill and tone. We haven’t done a good job yet with those in AI.”
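Scriffignano’s distinction between regressive recall and genuine understanding can be sketched in a toy example (hypothetical, for illustration only — the class, its methods, and the training data are invented here, not part of his remarks): a lookup-style “model” can recombine concepts it has already observed, but it has no basis at all for responding to a question outside its past data.

```python
# Toy illustration (hypothetical): a "regressive" system can only
# work from what has already been observed.
from collections import defaultdict


class RegressiveResponder:
    def __init__(self):
        # concept -> examples seen in "training"
        self.memory = defaultdict(list)

    def observe(self, concept, example):
        """Record a past observation, the only kind of knowledge available."""
        self.memory[concept].append(example)

    def answer(self, concept, n=5):
        """Answer by regressing over past data; a novel concept yields nothing."""
        seen = self.memory.get(concept)
        if not seen:
            # No prior observations to draw on: no intuition, no context,
            # no way to improvise an appropriate response.
            return None
        return seen[:n]


bot = RegressiveResponder()
for item in ["cherry", "tomato", "fire engine", "lipstick", "clown's nose"]:
    bot.observe("red things", item)

print(bot.answer("red things"))        # recombines items it has already seen
print(bot.answer("alien baby names"))  # None: nothing in past data to regress over
```

A human asked for alien baby names would improvise with humor or ask a clarifying question; the sketch above, like the AI Scriffignano describes, can only return what its history supports.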
[ For more insights on artificial intelligence, see our related article, 5 TED Talks on AI to watch. ]
2. AI will overtake human intelligence
In a recent worldwide poll, scientists predicted that AI would become smarter than humans sometime in the next 30 to 50 years. Scriffignano has a different prediction: “AI never will overtake human intelligence.” What about the scientists’ predictions? “The reason that they’re right – and I’m right – is that AI can overtake us at performing specific tasks.”
For example, earlier this year AlphaGo, created by Alphabet Inc. (Google’s parent company), beat the world’s top-ranked Go player without a handicap. Go has long proved a challenge for computers because it offers too many possible combinations for the kind of brute-force calculations that allow computers to beat the world’s best chess players. Winning at Go requires pattern recognition, something humans have traditionally done better than algorithms.
AlphaGo’s win was considered a victory for AI – and rightly so. Still, Scriffignano points out, “It doesn’t do a very good job of composing music, and it can’t chop wood, and it can’t argue about why it doesn’t work. Self-reference is a big part of intelligence.”
3. AI is more trustworthy than human intelligence
Self-driving cars have logged many miles with fewer fatalities per mile than human drivers, but one did cause a death last year when it mistook the side of a semi truck for the sky. “Having blinders on about risk with AI is a big mistake,” Scriffignano notes. “For example, AI brings in all the data it has and considers it all true, but maybe some of it was true and isn’t true anymore. There are a lot of risks associated with taking your hands off the wheel, or letting AI drive a process and taking people out of the loop.”
That doesn’t mean that every decision made by AI should be overseen or double-checked by a human. But it does mean that IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” Scriffignano says.