Lavender
(Image Source: Israel Defense Forces/X)

IDF Reportedly Using AI System, “Lavender,” to Target Militants in Gaza, Sparking Debates on the Future of AI Warfare.

In a controversial revelation, the Israel Defense Forces (IDF) has reportedly been employing an artificial intelligence (AI) system, dubbed “Lavender,” to identify militants for targeted bombings in its ongoing war against Hamas in Gaza.

This significant pivot toward integrating AI into lethal military operations opens a Pandora’s box of ethical, legal, and humanitarian questions, signaling a potentially seismic shift in warfare.

The existence of Lavender was first brought to light by a joint investigation by the Israeli online magazines +972 and Local Call. Both outlets are known for their critical stance on Israeli policies in Palestinian territories. 

In a lengthy press statement, the IDF did not outright dispute Lavender’s existence but qualified its acknowledgment, clarifying that AI was not being used to directly identify suspected terrorists.

Instead, the IDF stressed that Lavender was not a “system” but rather “a database whose purpose is to cross-reference intelligence sources, to produce up-to-date layers of information on the military operatives of terrorist organizations.”

“According to IDF directives, analysts must conduct independent examinations, in which they verify that the identified targets meet the relevant definitions in accordance with international law and additional restrictions stipulated in the IDF directives,” the IDF said. 

The IDF has been at war with Hamas since October 2023, after the designated terrorist group launched an unprovoked attack that killed nearly 1,200 Israelis, including women and children, and involved rape and kidnapping.

According to investigations and Israeli intelligence sources cited by +972 and Local Call, Lavender is designed to mark suspected operatives within the military wings of Hamas and Palestinian Islamic Jihad (PIJ) for possible airstrikes. 

Reportedly, the system identifies targets by sifting through vast amounts of data collected from a myriad of surveillance and intelligence sources across the Gaza Strip. This includes everything from social media activity and communication patterns to location data, effectively turning individuals’ entire digital footprints into variables for analysis. 

By leveraging machine learning algorithms, Lavender assesses these data points to rank individuals based on the likelihood of their association with militant organizations, prioritizing them as potential targets for military action.

Despite assurances from the IDF, the most controversial aspects of Lavender’s operation are its reported error rate and the minimal human oversight in the final targeting decisions. 

Sources indicated that Lavender operates with a roughly 10% error rate and occasionally marks individuals with only tenuous links to militant groups as legitimate targets.

Additionally, human analysts purportedly only spend about “20 seconds” reviewing each AI-generated target, often merely confirming the target’s gender as a proxy for accuracy before proceeding. 

The procedure described by sources suggests that the IDF relies heavily on AI’s judgment, reducing human involvement to a cursory formality. If true, this raises profound ethical and legal concerns about accountability and the potential for civilian casualties.

According to four intelligence sources, Lavender had designated approximately 37,000 Palestinians as Hamas militants for targeting, the majority of them low-ranking operatives. The IDF disputed these claims, reiterating that the “system” [Lavender] is simply a database and “not a list of confirmed military operatives eligible to attack.”

During their investigation, +972 and Local Call said they also uncovered another Israeli AI system, called “Where’s Daddy,” that uses machine learning to track targeted individuals and identify when they are at home so that bombings can be carried out. According to sources, this strategy of striking residences often resulted in the killing of a target’s entire family and the deaths of “thousands of Palestinians” not involved in the fighting.

“We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity,” an unnamed intelligence officer was quoted saying. “On the contrary, the IDF bombed them in homes without hesitation as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”

In response, the IDF vehemently denied carrying out strikes that would potentially result in high civilian casualties or acting outside the confines of international humanitarian law. 

“The IDF does not carry out strikes when the expected collateral damage from the strike is excessive in relation to the military advantage,” the IDF said. “As for the manner of carrying out the strikes – the IDF makes various efforts to reduce harm to civilians to the extent feasible in the operational circumstances ruling at the time of the strike.” 

“The IDF outright rejects the claim regarding any policy to kill tens of thousands of people in their homes.” 

The news of Israel purportedly using AI to determine targets for airstrikes comes amid heightened scrutiny over Israel’s military tactics, particularly following an airstrike on vehicles belonging to the charity World Central Kitchen, which resulted in the deaths of seven aid workers.

Israeli Prime Minister Benjamin Netanyahu called the strike “unintentional.” IDF Chief of the General Staff Herzi Halevi described it as “a mistake that followed a misidentification.” 

“I want to be very clear – the strike was not carried out with the intention of harming WCK aid workers. It was a mistake that followed a misidentification – at night during a war in very complex conditions. It shouldn’t have happened,” Halevi said in a video statement.

It remains unclear if Lavender, or any similar AI system, played a role in targeting before the strike on the aid convoy.

Setting aside Lavender and the use of AI in modern warfare, Israel’s ability to reduce collateral damage in its ongoing war in Gaza is complicated by Hamas’ well-documented use of civilians as “human shields.” The designated terrorist organization frequently places underground bunkers and weapons depots under or near schools, mosques, and hospitals.

In a December interview with Russia Today’s Arabic channel, Moussa Abu Marzouk, a prominent member of Hamas’ political bureau, said Hamas did not have a responsibility to protect civilians living in the Gaza Strip. Instead, Abu Marzouk suggested the obligation to protect noncombatants fell on Israel and the United Nations.

Nevertheless, these recent reports of Lavender amplify concerns over the use of AI in conflict zones, highlighting the stark risks and unintended consequences of such military innovations.

The use of AI for lethal purposes carries profound ethical ramifications, yet the legal framework governing lethal autonomous weapons is struggling to keep up with the burgeoning technology. 

In November 2023, the United Nations approved a new resolution on lethal autonomous weapons, requiring that a system “must not be in full control of decisions involving killing or harming humans.”

The U.S. Department of Defense’s policy on lethal autonomous weapons is relatively vague. It notes that an AI system must “allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”    

Israel’s reported deployment of Lavender appears to be the first known instance of a military using AI to directly facilitate lethal action in warfare. According to the reports, however, the system does not execute strikes itself. Instead, it plays a crucial role in selecting potential targets, creating a subtle loophole within existing international legal frameworks addressing lethal AI applications.

This unprecedented step underscores the urgent need for a robust international dialogue on the governance of AI in military operations, emphasizing the importance of ethical considerations and the protection of civilian lives.

Ultimately, the discourse surrounding “Lavender” could set a precedent for the future of AI in warfare, compelling a reevaluation of existing legal frameworks and ethical guidelines to address the complexities introduced by these sophisticated emerging technologies.

Tim McMillan is a retired law enforcement executive, investigative reporter and co-founder of The Debrief. His writing typically focuses on defense, national security, the Intelligence Community and topics related to psychology. You can follow Tim on Twitter: @LtTimMcMillan.  Tim can be reached by email: tim@thedebrief.org or through encrypted email: LtTimMcMillan@protonmail.com