The Defense Advanced Research Projects Agency (DARPA) recently showcased its GARD program, which is designed to counter adversaries who try to “trick” America’s AI-enabled defense systems.
Short for “Guaranteeing AI Robustness against Deception,” the program’s set of technologies and strategies should help America’s military improve AI-enhanced systems like advanced targeting systems. That’s because these systems can, in some cases, be deceived by low-tech methods that enemies use to trick them into identifying hostile forces or vehicles as allies, or even into flagging a friendly vehicle as an enemy target.
“That is a program that’s focused on building defenses against adversarial attacks on AI systems,” explained Matt Turek, the deputy director for DARPA’s Information Innovation Office, during a virtual event hosted by the Center for Strategic and International Studies last week.
Although the GARD program was originally revealed in January 2022, this latest announcement shows that the program’s efforts are ready to be implemented. That news comes as a welcome relief to those who develop AI-based systems designed to give our military forces an edge.
Due to their complex nature, AI systems can also be vulnerable to attack and deception. In fact, Turek says AI system vulnerabilities belong to an entirely different class than those found in traditional software-driven systems.
“AI systems are made out of software, obviously, right, so they inherit all the cyber vulnerabilities — and those are an important class of vulnerabilities — but [that’s] not what I’m talking about here,” Turek told event attendees. Instead, the DARPA official said the relatively new and overly complex nature of AI systems has created a whole new arena of previously unseen vulnerabilities.
“There are sort of unique classes of vulnerabilities for AI or autonomous systems, where you can do things like insert noise patterns into sensor data that might cause an AI system to misclassify,” Turek said. “So you can essentially, by adding noise to an image or a sensor, perhaps break a downstream machine learning algorithm. You can also, with knowledge of that algorithm, sometimes create physically realizable attacks.”
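The noise-pattern attack Turek describes can be sketched in a few lines. The example below is a hypothetical toy, not any real GARD or DoD system: a linear “friend vs. foe” scorer stands in for a deep-network classifier, and the weights, the `epsilon` noise budget, and all variable names are invented for illustration. With knowledge of the model (a “white-box” attacker), shifting every sensor feature by a tiny, gradient-aligned amount is enough to flip the classification even though the input barely changes.

```python
import numpy as np

# Hypothetical toy model, for illustration only: a linear "friend vs. foe"
# classifier whose score is w . x; a positive score means "friendly".
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # model weights, known to the attacker ("white-box")

x = 0.05 * w               # a clearly "friendly" input (strongly positive score)

# Gradient-sign-style perturbation: shift each feature a small, fixed amount
# in the direction that lowers the score. Because the gradient of w . x with
# respect to x is simply w, the worst-case "noise pattern" is -sign(w).
epsilon = 0.1
x_adv = x - epsilon * np.sign(w)

print("clean score:", w @ x)       # positive -> classified "friendly"
print("noisy score:", w @ x_adv)   # negative -> same object read as "foe"
```

Practical attacks such as the fast gradient sign method apply this same idea to deep networks, using backpropagation to obtain the gradient; defenses of the kind GARD develops aim to make models insensitive to such small perturbations.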
One example of these vulnerabilities involves AI targeting systems that use visual and sensor information to determine whether a vehicle is friend or foe. In such a case, Turek says, simply adding a well-placed sticker to a friendly bus could trick the AI software into classifying it as an enemy tank and marking it for attack. The same scenario works in reverse: an enemy vehicle could be disguised with low-tech methods to make an AI targeting system read it as a friendly vehicle rather than an immediate threat.
To counteract these issues, Turek says DARPA teamed up with industry partners to harden AI systems against falling prey to such low-tech efforts. The result, he says, is a matured GARD program that now offers tools to anticipate and counteract such attacks.
“Whether that is physically realizable attacks or noise patterns that are added to AI systems, the GARD program has built state-of-the-art defenses against those,” said Turek. “Some of those tools and capabilities have been provided to the Chief Digital and Artificial Intelligence Office (CDAO).”
In his statements, Turek noted that these capabilities could bolster efforts across the entire Defense Department ecosystem to counter adversaries trying to trick America’s advanced AI systems. That includes DARPA itself, which is a strategic target for adversaries looking to cripple America’s advanced defense technologies.
“Products that come out of those research programs could go a couple places … Transitioning them to CDAO, for instance, might enable broad transition across the entirety of the DOD,” Turek said. “I think having an organization that can provide some shared resources and capabilities across the department [and] can be a resource or place people can go look for help or tools or capabilities — I think that’s really useful. And from a DARPA perspective, it gives us a natural transition partner.”
In a virtual nod to the civilian and commercial infrastructure that could also fall victim to AI trickery, the DARPA executive said program officials hope to one day transition some key elements from the GARD program to their non-military partners.
“We have created new algorithms—some of those actually in partnership both with the research teams that we’re funding and with researchers at Google—and then created open-source tools that we can provide back to the broader community so that we can really raise defenses broadly in AI and machine learning,” Turek said. “But those tools [are] also provided to CDAO, and then they can be customized for DOD use cases and needs.”
While many of the details about how the GARD program will actually work remain classified, Turek said that the proliferation of AI across the defense and military landscape makes the program’s efforts both timely and critical.
“There is really broad penetration across the agency,” Turek said. “So it’s really difficult to sum up, you know, what the agency as a whole is up to, but from an [information innovation office] perspective, we’re really looking to try and advance … how do we get to a highly trustworthy AI that we can bet our lives on and [ensure] that not be a foolish thing to do.”
Looking forward, GARD program leadership says the effort remains in its early stages. However, they also believe continued advancements to the GARD toolkit are not only coming but are a core part of what DARPA was designed to do.
“DARPA’s core mission [is to] prevent and create strategic surprise,” Turek explained. “So the implication is that we’re looking over the horizon for transformative capabilities. So in some sense, we are very early in the research pipeline, typically.”
Christopher Plain is a Science Fiction and Fantasy novelist and Head Science Writer at The Debrief. Follow and connect with him on X, learn about his books at plainfiction.com, or email him directly at christopher@thedebrief.org.