Artificial intelligence is increasingly shaping military strategies and battlefield operations, and the U.S. Department of Defense (DoD) is looking to ensure the security and resilience of its AI battlefield systems.
The Defense Advanced Research Projects Agency (DARPA) recently announced the launch of the Securing AI for Battlefield Effective Robustness (SABER) program. This program is designed to establish an operational AI red teaming process for assessing vulnerabilities in AI-enabled defense systems.
With potential adversaries continuously looking for ways to exploit weaknesses in artificial intelligence, SABER will attempt to ensure that America’s military technological edge remains intact.
As the Pentagon prepares for a future where AI plays a central role in warfare, the initiative will aim to identify and mitigate security risks, preventing potential manipulations that could compromise critical missions.
“Today, there is still a limited ability to operationally assess deployed military AI-enabled systems for adversarial vulnerabilities, and the ‘theoretical’ adversarial AI attacks have not been practically demonstrated in operational settings,” a special notice posted by DARPA reads. “As a result, the operational security risks of AI-enabled battlefield systems remain largely unknown.”
The Pentagon has long recognized AI’s potential to enhance battlefield operations. Autonomous systems powered by AI can improve decision-making speed, increase accuracy, and reduce the burden on human soldiers.
AI's expanding use ranges from reconnaissance drones to automated threat detection systems, and the military is integrating these technologies into its operational frameworks at an unprecedented pace.
However, these advancements come with significant risks. Machine learning models can be vulnerable to adversarial attacks, in which an enemy manipulates input data to deceive a system.
This could include instances where corrupted data skews a model’s ability to function correctly or the use of deceptive signals that trick computer vision systems into misidentifying objects. More concerning are model-stealing attacks that could allow adversaries to replicate and exploit U.S. AI battlefield systems.
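The core idea behind these evasion attacks can be illustrated with a toy example. The sketch below shows an FGSM-style (fast gradient sign method) perturbation against a hypothetical linear "threat detector" — all weights, inputs, and labels are invented for exposition, and real attacks on deployed computer vision systems are far more sophisticated:

```python
# Illustrative FGSM-style evasion against a toy linear classifier.
# The weights, inputs, and labels below are hypothetical examples.

w = [1.0, -2.0, 0.5]  # fixed weights of a tiny "threat detector"

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

def predict(x):
    return "threat" if score(x) > 0 else "benign"

x_clean = [2.0, 0.5, 1.0]  # an input the model correctly flags as a threat

# For a linear model, the gradient of the score with respect to the input
# is just w, so an attacker can flip the label with a small step against
# the sign of each weight.
eps = 1.5
sign = lambda v: (v > 0) - (v < 0)
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x_clean, w)]

print(predict(x_clean), "->", predict(x_adv))  # threat -> benign
```

The perturbation is bounded (each feature moves by at most `eps`), which is why adversarial inputs can look nearly identical to clean ones while still fooling the model — the property that makes such attacks hard to detect operationally.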
Despite extensive research efforts over the past decade highlighting these risks, practical assessments of AI vulnerabilities in real-world settings have remained limited. The Pentagon is now taking proactive measures to ensure deployed AI systems can withstand real-world threats.
DARPA’s SABER program reflects the growing need for a robust AI security framework. A recent Proposers Day announcement reveals the program is looking to bring together an elite team of specialists tasked with red-team testing and evaluation of battlefield systems that rely on artificial intelligence.
The SABER AI red-team will develop and employ advanced adversarial AI, cyber, and electronic warfare techniques to assess weaknesses and develop mitigation strategies. The initial focus will be on autonomous ground and aerial systems expected to be deployed within the next 1-3 years.
To flesh out DARPA’s AI red team, SABER seeks expertise in various disciplines, including adversarial AI techniques, cybersecurity methodologies, physical security measures, and the integration of security tools into a comprehensive AI red-teaming framework. The overarching goal is to establish a sustainable model for assessing the security of AI battlefield systems before they are deployed in combat scenarios.
To facilitate the program’s objectives, DARPA is hosting a hybrid Proposers Day on March 12, 2025, in Arlington, Virginia. The event, which is not open to the public, aims to provide information on the program’s technical goals, address questions, and encourage collaboration among researchers and industry leaders.
The Proposers Day will include three sessions: one restricted to U.S. citizens, one limited to U.S. citizens from American organizations, and one open to non-U.S. participants.
The Pentagon has not announced the contract award amounts for the SABER program. However, the program will be run out of DARPA’s Information Innovation Office.
As AI-driven warfare becomes inevitable, adversarial nations are almost assuredly developing their own capabilities to compromise defense technologies. By investing proactively in the security of AI battlefield systems, the U.S. aims to stay ahead of potential threats and reinforce its position as a global leader in defense technology.
Ultimately, the implications of artificial intelligence security extend far beyond the battlefield. If left unprotected, vulnerabilities in AI systems could be exploited not only in military operations but also in civilian infrastructure, financial systems, and national security frameworks.
SABER represents a crucial step in red-teaming AI systems, safeguarding them from potential threats and ensuring that technological advancements do not become liabilities.
While SABER seeks to enhance AI security, the role of artificial intelligence in warfare remains a contentious issue. Concerns persist about autonomy in lethal decision-making and the potential for unintended consequences.
Several DoD programs are currently exploring the ethical and legal boundaries of AI and increasing the trustworthiness of systems involved in national security missions.
Additionally, the rapid evolution of AI capabilities necessitates security measures that can continuously adapt. Adversarial techniques are becoming more sophisticated, requiring ongoing research and development to keep pace with emerging threats. The challenge lies in identifying vulnerabilities and ensuring that AI battlefield systems remain resilient over time.
By proactively identifying and mitigating vulnerabilities, the U.S. military is taking a crucial step toward ensuring that future warfare remains effective and secure.
In a recent memo, Secretary of Defense Peter Hegseth identified “Critical Cybersecurity” as one of 17 “America First” priorities for national defense. The Pentagon’s advancement with SABER reflects a broader strategic shift, recognizing both artificial intelligence’s potential and risks in military operations.
As AI continues to redefine the modern battlefield, programs like SABER will be critical in determining the resilience of U.S. defense capabilities for the foreseeable future.
Tim McMillan is a retired law enforcement executive, investigative reporter and co-founder of The Debrief. His writing typically focuses on defense, national security, the Intelligence Community and topics related to psychology. You can follow Tim on Twitter: @LtTimMcMillan. Tim can be reached by email: tim@thedebrief.org or through encrypted email: LtTimMcMillan@protonmail.com
