The Department of Defense’s Inspector General has announced the launch of a joint evaluation with the National Security Agency’s (NSA) Inspector General to assess the NSA’s integration of artificial intelligence into signals intelligence operations.
In March, the National Security Commission on Artificial Intelligence (NSCAI) issued its final report, saying the U.S. is currently unprepared for the coming AI era and cyber-competition with China. Formed by congressional mandate in 2019, the NSCAI noted that to maintain “AI superiority,” America’s defense and intelligence communities would need to be “AI-ready” by 2025.
“The U.S. government is not prepared to defend the United States in the coming artificial intelligence (AI) era. AI applications are transforming existing threats, creating new classes of threats, and further emboldening state and non-state adversaries to exploit vulnerabilities in our open society,” the report reads.
Some analysts have described the current AI arms race, and the accompanying geopolitical and military tensions between the U.S. and China, as a Second Cold War.
Beyond the need to compete with foreign AI systems that could threaten national security, U.S. officials, lawmakers, and industry experts have increasingly heralded the immense benefits of AI in defense and intelligence applications.
AI’s ability to comb through enormous amounts of data under varying and unpredictable circumstances to make decisions without significant human oversight can be invaluable to the realm of intelligence collection and assessment. AI’s unique capabilities are particularly significant for the NSA, the agency responsible for America’s global collection and processing of signals intelligence.
For all the benefits artificial intelligence and machine learning can offer, the Pentagon has faced just as many difficulties in recent years integrating software and algorithms into military systems.
Project Maven offers a telling example of both the advantages and the controversies surrounding the DoD’s use of AI.
With Project Maven, the Pentagon uses machine learning to comb through imagery captured by unmanned aerial vehicles (UAVs), with AI algorithms capable of automatically identifying hostile activity for targeted strikes. In this way, AI can quickly perform work that would otherwise fall to human analysts, freeing up countless man-hours and allowing more timely decisions to be made about collected data.
Project Maven also became a source of controversy when, in 2019, Google withdrew from its partnership with the DoD after more than 3,000 employees signed a petition expressing concerns about the tech giant’s involvement with the program.
At a 2019 press event for the release of the NSCAI’s interim report, Kent Walker, Google’s senior vice president of global affairs, described the company’s decision to step away from Project Maven as “pressing the reset button until we had the opportunity to develop our own set of AI principles, our own work with regard to internal standards and review processes. But that was a decision focused on a discrete contract — not a broader statement about our willingness or our history of working with the Department of Defense.”
In a late 2020 report, the Congressional Research Service noted that “an apparent cultural divide between DoD and commercial technology companies” has created difficulties in integrating AI into defense applications. The report cited a recent survey that found nearly 80% of leadership at several prominent Silicon Valley companies rated the commercial technology community’s relationship with the DoD as poor or very poor.
Technologies such as GPS and the internet were developed under defense-directed programs before eventually spreading to the private sector and being adapted for commercial use. Today, the tables are turned: private industry is leading the way in AI development, leaving the DoD to adapt these technologies for military use after the fact.
As the former Director of the Center for Information Technology Policy, Dr. Edward Felten, noted at the 2017 Center for Strategic and International Studies Global Security Forum, “It is unusual to have a technology that is so strategically important being developed commercially by a relatively small number of companies.”
Some private technology companies have also expressed hesitation about working with the DoD on AI over intellectual property and data rights concerns. A report by the Government Accountability Office noted that technology companies consider intellectual property to be their “lifeblood.” Yet the DoD typically requires unlimited technical data and software rights for products produced under government acquisitions, causing some major companies to avoid pursuing government contracts.
Because of the Pentagon’s current reliance on the commercial technology sector, feelings of distrust, hesitation, or ethical concern over the use of AI present a severe problem for the Pentagon’s ability to meet the NSCAI’s goal of being “AI-ready” within the next four years.
In hopes of alleviating some of Silicon Valley’s ethical concerns over the use of AI, in May 2021, Deputy Secretary of Defense Kathleen Hicks issued a memorandum on “Implementing Responsible Artificial Intelligence in the Department of Defense.” The memo outlined five ethical principles for AI use, requiring products to be: Responsible, Equitable, Traceable, Reliable, and Governable.
In August 2020, the Office of the DoD Inspector General first announced it would be conducting an assessment of the NSA’s integration of artificial intelligence in its signals intelligence operations. With this recent announcement, the IG’s office says that the previous review has been terminated. The current evaluation will now move forward as a joint project between the DoD and NSA IG Offices.
The recent joint assessment announcement provided few details on what specifically evaluators will be assessing regarding the NSA’s use of artificial intelligence.
Because the current inquiry is characterized as an “assessment” and not an “investigation,” the IG’s involvement is not related to any alleged criminal or administrative violations involving the NSA’s integration of AI systems.
In a previous interview with The Debrief regarding the Inspector General’s assessment of the DoD’s handling of “unidentified aerial phenomena,” Dwrena Allen, Chief of Communications for the DoD IG, explained, “The main differences between evaluations and an investigation are that investigations have subjects and complainants, witnesses and allegations. Evaluations assess processes, policies, and procedures to identify, verify or review whether compliance gaps exist and if so, provide recommendations on how to address them.”
That said, the joint memo signed by Assistant Inspector General for Evaluations with the DoD, Randolph Stone, and Assistant Inspector General for Audits with the NSA, Jamal Hall, notes, “We may revise the objective as the evaluation proceeds, and we will also consider suggestions from DoD and National Security Agency management on additional or revised objectives.”