When bad actors have AI tools: Rethinking security tactics

How can you prepare for a reality where bad actors use Artificial Intelligence technology to cause disruption? The good news is that AI can help you bolster your security against such threats, too.

Cloud-stored data is suddenly encrypted, followed by a note asking for ransom or threatening public embarrassment. Corporate email addresses become conduits for malware and malicious links. An organization’s core business platform abruptly goes offline, disrupting vital communications and services for hours.

We’ve learned to recognize the familiar signs of a cyberattack, thanks to the growing array of well-publicized incidents in which threat actors from nation-states or criminal enterprises breach our digital networks. Artificial Intelligence is changing this picture.

[ Read also: 5 approaches to security automation and How to automate compliance and security with Kubernetes: 3 ways. ]

With AI, organizations can program machines to perform tasks that would normally require human intelligence. Examples include self-driving trucks, computer programs that develop drug therapies, and software that writes news articles and composes music. Machine learning (ML) is an application of AI that uses algorithms to teach computers to learn and adapt to new data.
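
To make the idea of learning and adapting to new data concrete, here is a minimal, illustrative Python sketch using scikit-learn's built-in handwritten-digits dataset; the dataset, model choice, and split are examples for illustration, not recommendations.

```python
# The algorithm is not told what each digit looks like; it adapts its
# parameters to labeled examples and then generalizes to unseen data.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on unseen digits:", model.score(X_test, y_test))
```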

AI and ML represent a revolutionary new way of harnessing technology – and an unprecedented opportunity for threat actors to sow even more disruption.

What do these emerging adversarial AI/ML threats look like? How can we take the appropriate measures to protect ourselves, our data, and society as a whole?

Myriad opportunities for manipulation

Step one in cybersecurity is to think like the enemy. What could you do as a threat actor with adversarial AI/ML? The possibilities are many, with the potential impact extending beyond cyberspace:

You could manipulate what a device is trained to see – for instance, corrupting training imagery so that a driving robot interprets a stop sign as “55 mph.” Because intelligent machines lack the ability to understand context, the driving robot in this case would just keep driving – over obstacles or into a brick wall if they stood in its way. Closer to home, an adversarial AI/ML attack can fool your computer’s anti-virus software into allowing malware to run. (A minimal code sketch of this kind of attack appears after these examples.)

You could manipulate what humans see, like a phone number that looks like it’s from your area code. “Deepfakes” are a sophisticated – and frightening – example of this. Manufactured videos of politicians and celebrities, nearly indistinguishable from the real thing, have been shared over social media among millions of people before being identified as fake.

You could also manipulate what an AI application does, as Twitter users did with Microsoft’s AI chatbot Tay. In less than a day, they trained the chatbot to spew misogynistic and racist remarks.

Once a machine learning application is live, you can tamper with its algorithms – for instance, directing an application for automated email responses to instead spit out sensitive information like credit card numbers. If you’re with a cybercriminal organization, this is valuable data ripe for exploitation.

You could even alter the course of geopolitical events. Cyberattacks have already been moving into the physical world, as we saw with the 2016 hacking of Ukraine’s power grid. Adversarial AI ups the ante.
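
To make the first possibility above more concrete, here is a minimal sketch of an evasion-style attack (the fast gradient sign method) in Python with PyTorch. The `model`, `image`, and `label` names are hypothetical stand-ins for a pretrained classifier and a correctly labeled input batch; this is an illustration of the idea, not a recipe for any particular system.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Nudge each pixel slightly in the direction that most increases the
    model's loss: a human still sees a stop sign, but the model may not."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # how wrong is the model now?
    loss.backward()                               # gradient of the loss w.r.t. pixels
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()       # keep pixel values in a valid range
```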

[ Check out our primer on 10 key artificial intelligence terms for IT and business leaders: Cheat sheet: AI glossary. ]

Getting ahead of bad actors

Fortunately, as adversarial AI/ML tactics evolve, so do the cybersecurity measures against them. One tactic is training an algorithm to “think more like a human.” AI research and deployment company OpenAI suggests explicitly training algorithms against adversarial attacks, training multiple defense models, and training AI models to output probabilities rather than hard decisions, which makes it more difficult for an adversary to exploit the model.
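
As a rough illustration of explicitly training algorithms against adversarial attacks, the sketch below folds the hypothetical `fgsm_perturb` helper from the earlier sketch into an ordinary PyTorch training loop. Here, `model`, `train_loader`, and `optimizer` are assumed to be a standard classifier, data loader, and optimizer; the epsilon value and equal loss weighting are illustrative choices, not a prescription.

```python
import torch.nn.functional as F

def adversarial_training_epoch(model, train_loader, optimizer, epsilon=0.03):
    """One epoch of training on both clean and FGSM-perturbed batches."""
    model.train()
    for images, labels in train_loader:
        # Craft perturbed copies of this batch, then teach the model
        # to classify both the clean and the perturbed views correctly.
        adv_images = fgsm_perturb(model, images, labels, epsilon)
        optimizer.zero_grad()
        loss = (F.cross_entropy(model(images), labels)
                + F.cross_entropy(model(adv_images), labels))
        loss.backward()
        optimizer.step()
```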

Training can also be used in threat detection – for example, training computers to detect deepfake videos by feeding them examples of deepfakes compared with “real” videos.
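
Framed as code, that kind of detector is simply a supervised classifier. The sketch below assumes a hypothetical `frame_loader` that yields batches of video frames labeled 0 for real footage and 1 for deepfakes; the ResNet-18 backbone is just one example choice.

```python
import torch
import torch.nn.functional as F
from torchvision import models

detector = models.resnet18(num_classes=2)        # two classes: real vs. fake
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)

def train_detector_epoch(detector, frame_loader, optimizer):
    """One pass over labeled frames: 0 = real footage, 1 = deepfake."""
    detector.train()
    for frames, labels in frame_loader:          # frames: (N, 3, H, W) tensors
        optimizer.zero_grad()
        loss = F.cross_entropy(detector(frames), labels)
        loss.backward()
        optimizer.step()
```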

IT teams can also achieve “an ounce of prevention” by baking security into their AI/ML applications from the beginning. When building models, keep in mind how adversaries may try to cause damage. A variety of resources, like IBM’s Adversarial Robustness Toolbox, have emerged to help IT teams evaluate ML models and create more robust and secure AI applications.
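
As one example, here is a hedged sketch of what an evaluation with IBM's Adversarial Robustness Toolbox (ART) might look like. The wrapped `model`, the `x_test`/`y_test` arrays, and the hyperparameters are placeholders, and parameter names can shift between ART releases, so treat this as a starting point and check the current documentation.

```python
import numpy as np
import torch
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Wrap a trained PyTorch model (hypothetical here) so ART can attack it.
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters()),
    input_shape=(3, 224, 224),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Generate adversarial test inputs and measure how often they fool the model.
attack = FastGradientMethod(estimator=classifier, eps=0.05)
x_adv = attack.generate(x=x_test)                # x_test, y_test: held-out NumPy arrays
preds = np.argmax(classifier.predict(x_adv), axis=1)
print("accuracy under attack:", np.mean(preds == y_test))
```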

Where should organizations start their efforts? Identify the easiest attack vector and bake a defense against it directly into your AI/ML pipeline. By tackling concrete problems with bespoke solutions, you can mitigate threats in the short term while building the understanding and depth needed to work toward long-term solutions.

Attackers armed with AI pose a formidable threat. Bad actors are constantly looking for loopholes and ways to exploit them, and with the right AI system, they can manipulate systems in new, insidious ways and easily perform functions at a scale unachievable by humans. Fortunately, AI is part of the cybersecurity solution as well, powering complex models for detecting malicious behavior, sophisticated threats, and evolving trends – and conducting this analysis far faster than any team of humans could.

[ Get the eBook: Top considerations for building a production-ready AI/ML environment. ]

Edward Raff, PhD, is a Chief Scientist at Booz Allen Hamilton, where he leads the internal machine learning (ML) research team and supports clients in their ML work. Dr. Raff's research includes malware detection, adversarial ML, biometrics, high performance computing, and reproducibility in ML.