4 artificial intelligence trends to watch
Is this the year AI decision-making becomes more transparent?
However much your IT operation is using artificial intelligence today, expect to be doing more with it in 2018. Even if you have never dabbled in AI projects, this may be the year talk turns into action, says David Schatsky, managing director at Deloitte. “The number of companies doing something with AI is on track to rise,” he says.
Check out his AI predictions for the coming year:
1. Expect more enterprise AI pilot projects
Many of today’s off-the-shelf applications and platforms that companies already routinely use incorporate AI. “But besides that, a growing number of companies are experimenting with machine learning or natural language processing to solve particular problems or help understand their data, or automate internal processes, or improve their own products and services,” Schatsky says.
“Beyond that, the intensity with which companies are working with AI will rise,” he says. “Companies that are early adopters mostly have five or fewer projects underway today, but we think that number will rise to 10 or more pilots underway.” One reason for this prediction, he says, is that AI technologies are getting better and easier to use.
2. AI will help with data science talent crunch
Talent is a huge problem in data science, where most large companies are struggling to hire the data scientists they need. AI can take up some of the load, Schatsky says. “The practice of data science is increasingly automatable with tools offered both by startups and large, established technology vendors,” he says. A lot of data science work is repetitive and tedious, and ripe for automation, he explains. “Data scientists aren’t going away, but they’re going to get much more productive. So a company that can only do a few data science projects without automation will be able to do much more with automation, even if it can’t hire any more data scientists.”
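Schatsky doesn't name a specific tool, but the repetitive work he describes often amounts to trying many candidate models and keeping the one that validates best. Here is a minimal, purely illustrative sketch of that loop in stdlib Python — the dataset, the two candidate models, and the holdout split are all invented for the example:

```python
import random
import statistics

random.seed(0)

# Hypothetical dataset: y is roughly 2x + 1 with a little noise.
data = [(x / 10, 2 * (x / 10) + 1 + random.gauss(0, 0.2)) for x in range(200)]
train, test = data[:150], data[150:]

def fit_mean(train):
    """Baseline candidate: always predict the training mean of y."""
    mu = statistics.fmean(y for _, y in train)
    return lambda x: mu

def fit_linear(train):
    """Candidate: closed-form simple linear regression (least squares)."""
    xs = [x for x, _ in train]
    mx = statistics.fmean(xs)
    my = statistics.fmean(y for _, y in train)
    slope = sum((x - mx) * (y - my) for x, y in train) / sum((x - mx) ** 2 for x in xs)
    return lambda x: my + slope * (x - mx)

def mse(model, test):
    """Mean squared error on held-out data."""
    return statistics.fmean((model(x) - y) ** 2 for x, y in test)

# The "automation": fit every candidate, keep the one with the lowest
# holdout error -- no human picks the model by hand.
candidates = {"mean": fit_mean, "linear": fit_linear}
scores = {name: mse(fit(train), test) for name, fit in candidates.items()}
best = min(scores, key=scores.get)
```

Commercial automated-ML tools sweep far larger spaces of models and hyperparameters, but the principle is the same: the machine runs the tedious search, and the data scientist reviews the result.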
3. Synthetic data models will ease bottlenecks
Before you can train a machine learning model, you have to get the data to train it on, Schatsky notes. That’s not always easy. “That’s often a business bottleneck, not a production bottleneck,” he says. In some cases you can’t get the data because of regulations governing things like health records and financial information.
Synthetic data models can take a smaller set of data and use it to generate the larger set that may be needed, he says. “If you used to need 10,000 data points to train a model but could only get 2,000, you can now generate the missing 8,000 and go ahead and train your model.”
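Production synthetic-data tools use far richer generative models, but the core idea — fit a distribution to the small real sample, then draw the missing points from it — can be sketched in a few lines of stdlib Python. Everything here is a toy stand-in: the "real" data is itself simulated, and a single Gaussian fit is far cruder than what real tools do.

```python
import random
import statistics

random.seed(42)

# Hypothetical: we can only obtain 2,000 real measurements
# but need 10,000 to train the model.
real = [random.gauss(100, 15) for _ in range(2000)]  # stand-in for real data

# Fit a simple distribution to the real sample...
mu = statistics.fmean(real)
sigma = statistics.stdev(real)

# ...and generate the missing 8,000 points from it.
synthetic = [random.gauss(mu, sigma) for _ in range(8000)]
training_set = real + synthetic
```

The synthetic points carry no individual's actual record — useful when regulation blocks access to the real thing — while preserving the statistical shape the model needs to learn from.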
4. AI decision-making will become more transparent
One of the business problems with AI is that it often operates as a black box. That is, once you train a model, it will spit out answers that you can’t necessarily explain. “Machine learning can automatically discover patterns in data that a human can’t see because it’s too much data or too complex,” Schatsky says. “Having discovered these patterns, it can make predictions about new data it hasn’t seen.”
The problem is that sometimes you really do need to know the reasons behind an AI finding or prediction. “You feed in a medical image and the model says, based on the data you’ve given me, there’s a 90 percent chance that there’s a tumor in this image,” Schatsky says. “You say, ‘Why do you think so?’ and the model says, ‘I don’t know, that’s what the data would suggest.’”
If you follow that data, you’re going to have to do exploratory surgery on a patient, Schatsky says. That’s a tough call to make when you can’t explain why. “There are a lot of situations where even though the model produces very accurate results, if it can’t explain how it got there, nobody wants to trust it.”
There are also situations where because of regulations, you literally can’t use data that you can’t explain. “If a bank declines a loan application, it needs to be able to explain why,” Schatsky says. “That’s a regulation, at least in the U.S. Traditionally, a human underwriter makes that call. A machine learning model could be more accurate, but if it can’t explain its answer, it can’t be used.”
Most algorithms were not designed to explain their reasoning. “So researchers are finding clever ways to get AI to spill its secrets and explain what variables make it more likely that this patient has a tumor,” he says. “Once they do that, a human can look at the answers and see why it came to that conclusion.”
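One common trick researchers use for this is permutation importance: shuffle one input variable at a time and see how much the model's accuracy suffers. A variable the model truly relies on hurts a lot when scrambled; an irrelevant one changes nothing. A minimal sketch, with an invented two-feature "black box" where only the first feature matters:

```python
import random
import statistics

random.seed(1)

# Hypothetical black-box model: only the first feature drives the output.
def black_box(row):
    return 3.0 * row[0] + 0.0 * row[1]

rows = [[random.random(), random.random()] for _ in range(500)]
truth = [black_box(r) for r in rows]

def permutation_importance(model, rows, truth, feature):
    """Shuffle one feature column and measure how much error grows."""
    shuffled = [r[:] for r in rows]
    column = [r[feature] for r in shuffled]
    random.shuffle(column)
    for r, v in zip(shuffled, column):
        r[feature] = v
    return statistics.fmean((model(r) - t) ** 2 for r, t in zip(shuffled, truth))

imp0 = permutation_importance(black_box, rows, truth, 0)  # large: model needs it
imp1 = permutation_importance(black_box, rows, truth, 1)  # zero: model ignores it
```

The same probe works on any model you can query, which is what makes it attractive for opening up black boxes after the fact.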
That means AI findings and decisions can be used in many areas where they can’t be today, he says. “That will make these models more trustworthy and more usable in the business world.”