
Can AI-assisted "super agents" protect us from cybersecurity breaches?


The hackers are winning…for now. Capital One recently joined the ranks of companies victimized by data thieves. Large-scale hacks are now so common that news that should spark consumer outrage instead prompts people to roll their eyes and check on their credit freezes. Clearly, the current security protocols are failing to protect us.

What can we do about it?

Lieutenant Colonel Isaac Faber (PhD '19) thinks artificial intelligence might be our most important next-generation weapon in the war against hackers. Faber’s PhD dissertation with Professor Elisabeth Paté-Cornell proposes the creation of an early warning system for cybersecurity using "super agents."

"A super agent is a human and an AI that are both responsible for the same decisions," said Faber. A dual decision-making model leverages the strengths of both participants: computers excel at speed, and humans excel at making complex decisions. The parties (making up the super agent) could tag the other teammate when necessary.

Part of the problem for companies like Capital One and Equifax is that data servers are incredibly busy places with almost continuous data sharing requests. According to Faber, "It's really tough for humans to monitor. There's a lot of activity, so it's easy for hackers to hide in the noise." Even large hacks often go unnoticed until a third party alerts a company or government agency to the sale of their precious data on the black market.

"Most cybersecurity involves reacting to attacks and trying to minimize the damage," said Faber. But data heists always start with some kind of reconnaissance—the hacking equivalent of someone casing your house before they break in—and that's an opportunity. In Faber's model, would-be burglars would be identified and locked out before they carried off your flat screen.

In his research, he tested whether he could train the AI half of the super agent to recognize unfriendly data mapping (possible reconnaissance by hackers). He set up "honeypots" in commercial clouds: public-facing servers outfitted with sensors that collect data about every interaction. Basically, hacker bait.

After eight months and more than 600,000 interactions, he pulled the data and then cross-referenced it against publicly available blacklists of possible hackers compiled by cybersecurity vendors.

He found that the AI, trained on the honeypot data, could identify threats in most cases (about 86 percent).
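As an illustration only, a pipeline in that spirit might look like the sketch below: label each honeypot interaction by whether its source appears on a vendor blacklist, then train a classifier on behavioral features of the interaction. The file paths, feature names, and choice of classifier are assumptions made for the example, not details of Faber's experiment.

```python
# Illustrative sketch: train a classifier on honeypot interaction logs,
# labeled by cross-referencing source IPs against a public blacklist.
# File paths and feature names below are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical honeypot log: one row per interaction with a remote host.
interactions = pd.read_csv("honeypot_interactions.csv")
blacklist = set(pd.read_csv("vendor_blacklist.csv")["ip"])

# Label: 1 if the source IP appears on a vendor blacklist, else 0.
interactions["is_threat"] = interactions["src_ip"].isin(blacklist).astype(int)

# Hypothetical behavioral features describing each interaction.
features = ["request_count", "unique_ports_scanned", "bytes_sent",
            "session_duration_s", "failed_auth_attempts"]
X = interactions[features]
y = interactions["is_threat"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# How often are blacklist-confirmed threats recognized on held-out data?
print(classification_report(y_test, clf.predict(X_test)))
```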

But what about the threats the AI couldn't identify? After all, companies have to rebuff attacks thousands of times, but hackers just have to break in once.

This is the beauty of the super agent. An AI can make good decisions in high-confidence situations. In low-confidence situations, or completely new scenarios, the AI can tag a human for assistance. "The less confidence the computer has in a decision, the longer the human stays involved," said Faber. The human trusts the AI half because it only alerts the human in situations that are ambiguous.
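A minimal sketch of that handoff logic, under assumptions of my own (a single confidence threshold and an escalation callback, neither drawn from the dissertation), might look like this: the AI acts alone when it is confident, and tags in its human teammate when it is not.

```python
# Sketch of the human-AI "super agent" handoff; threshold and
# escalate_to_human callback are assumptions, not Faber's design.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SuperAgentDecision:
    action: str          # "block", "allow", or whatever the human chose
    confidence: float    # the AI's confidence in its own call
    decided_by: str      # "ai" or "human"

def decide(event_features, model, escalate_to_human: Callable,
           confidence_threshold: float = 0.9) -> SuperAgentDecision:
    """Route one security event through the super agent."""
    # Probability that the interaction is hostile, per the trained model.
    p_threat = float(model.predict_proba([event_features])[0][1])
    confidence = max(p_threat, 1.0 - p_threat)

    if confidence >= confidence_threshold:
        # High confidence: the AI decides on its own, at machine speed.
        action = "block" if p_threat >= 0.5 else "allow"
        return SuperAgentDecision(action, confidence, decided_by="ai")

    # Low confidence or novel behavior: hand the case, with its evidence,
    # to the human teammate and record their judgment.
    human_action = escalate_to_human(event_features, p_threat)
    return SuperAgentDecision(human_action, confidence, decided_by="human")
```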

"Robot and Human Constitute a Super-Agent Decision Maker," from Cyber Risk Management: AI-generated Warnings of Threats, I. Faber

Right now, very few companies are employing machine learning or AI in their data security operations, but perhaps that will change.

Faber's research outlines a model in which true real-time risk assessments can inform both individual cybersecurity decisions and an overall data security policy. "The AI should give you the risks in the form of a set of warnings and recommendations in a way that's easily shared with the humans," said Faber. The humans get the level of information they want in order to create better security protocols, and over time the AI learns the preferences of the humans in charge, ultimately requiring less human intervention.
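Purely as a hypothetical illustration of that idea, a warning record and a preference-driven adjustment of the AI's autonomy might be shaped like this; none of the field names or the update rule come from the paper.

```python
# Hypothetical warning format and a simple preference-learning rule:
# the more often the human overrides the AI, the more the AI defers.
from dataclasses import dataclass

@dataclass
class ThreatWarning:
    src_ip: str
    risk: float          # estimated probability of hostile intent
    evidence: dict       # features that drove the assessment
    recommendation: str  # e.g. "block source and review access logs"

def update_threshold(threshold: float, human_overrode_ai: bool,
                     step: float = 0.01) -> float:
    """Nudge the autonomy threshold based on the human's feedback: more
    overrides mean the AI should defer more; fewer mean it can act alone."""
    if human_overrode_ai:
        return min(0.99, threshold + step)
    return max(0.5, threshold - step)
```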

The philosophy that an AI should adapt to the individual preferences of the user is at the heart of Faber's research. "There hasn't been enough work about how AI systems learn and incorporate preferences from the human. It's not a body of research… but it could be."

He wonders why, for instance, autonomous vehicles all drive the same. "Cars should drive according to your preferences versus the preferences of some engineer you've never met. I think that's the evolution of how we'll see the AI domain going—for cyber, but also vehicles, and healthcare. There's lots of places where you want the AI behaving according to your preferences and not necessarily according to just raw data that it's pulling."

Faber will have a lot of time to work on these issues around AI. With the completion of his MS&E PhD, he returned to the military as chief data scientist of the Army Artificial Intelligence Task Force. He'll work on projects that incorporate AI into military systems at the Carnegie Mellon National Robotics and Engineering Center (NREC).

Read the full paper here