The Pentagon has announced the creation of a new generative artificial intelligence (AI) task force that aims to examine and incorporate large language models (LLMs) and other AI capabilities throughout the Department of Defense (DoD).
Speaking with reporters at a press briefing at the Pentagon late on Thursday, Pentagon Press Secretary Air Force Brig. Gen. Pat Ryder said the new initiative, called Task Force Lima, “will assess, synchronize and employ generative artificial intelligence across the department.”
“Deputy Secretary of Defense Kathleen Hicks directed the organization of the task force to minimize risk and redundancy while the department pursues generative AI initiatives, including large language models,” Ryder said.
In a statement released by the Pentagon this week, Hicks said Task Force Lima “underlines the Department of Defense’s unwavering commitment to leading the charge in AI innovation.”
Today, @DepSecDef signed Task Force Lima into effect. This @DeptofDefense-wide generative AI task force will examine generative AI use cases from across the federal government & develop recommendations on responsibly using these powerful tools. Learn more: https://t.co/GELM3aJqRf
— DOD Chief Digital & AI Office (@DODCDAO) August 10, 2023
The task force, under the leadership of the DoD’s Chief Digital and Artificial Intelligence Office, aims to ensure that the DoD can bring the most advanced technologies to bear on national security while harnessing the capabilities of AI responsibly.
Dr. Craig Martell, the DoD’s Chief Digital and Artificial Intelligence Officer, said the DoD must stay mindful of how generative AI models are implemented responsibly, and be able to identify safeguards that could prevent the national security challenges poorly managed AI might create.
“We must also consider the extent to which our adversaries will employ this technology and seek to disrupt our own use of AI-based solutions,” Martell said in a statement.
Concerns over the misuse of AI, as well as the unforeseen consequences that could result from its development, have prompted warnings from tech industry experts in recent months.
“Mitigating the risk of extinction from AI should be a global priority,” read part of a statement that appeared on the website of the Center for AI Safety (CAIS) in May, which grouped AI with pandemics and nuclear war among the significant dangers with the potential to impact life on Earth.
Computer scientist Geoffrey Hinton, widely regarded as the “godfather of AI,” has garnered attention over the last several months following his departure from Google, where he had continued his AI research. Hinton, who left his position so he could speak more freely about his growing concerns about AI, has since become one of the most prominent cautionary voices in the AI debate.
“They still can’t match us, but they’re getting close,” Hinton said of AI’s quickly expanding capabilities during the three-day Collision technology conference in Toronto earlier this year.
During a recent on-camera interview with CNN, Martell emphasized that any decisions involving DoD operations will always be made by humans, and not autonomously by any form of AI.
“It’s very clear to all of us that there’s always a responsible human who makes the decision,” Martell told CNN host Christiane Amanpour.
“It will always be the case that somebody has decided that we are going to leverage a particular technology, and it will always be the case that someone will be responsible. There will be a responsible agent.”
“We don’t imagine a world where machines are making these sorts of decisions on their own,” Martell added.
Despite the concerns many have about the pace at which AI is advancing, the DoD says that adopting the technology and integrating it into its operations will significantly enhance national security.
U.S. Navy Capt. M. Xavier Lugo, the mission commander of Task Force Lima and a member of the CDAO’s Algorithmic Warfare Directorate, said in a statement that the DoD “recognizes the potential of generative AI to significantly improve intelligence, operational planning, and administrative and business processes.”
“However, responsible implementation is key to managing associated risks effectively,” Lugo added.
Hicks said this week that the DoD’s focus “remains steadfast on ensuring national security, minimizing risks, and responsibly integrating these technologies.”
“The future of defense is not just about adopting cutting-edge technologies,” Hicks added, “but doing so with foresight, responsibility, and a deep understanding of the broader implications for our nation.”
Micah Hanks is the Editor-in-Chief and Co-Founder of The Debrief. He can be reached by email at micah@thedebrief.org. Follow his work at micahhanks.com and on Twitter: @MicahHanks.