Image: Pixabay

Facial Recognition Drones Racially Profile, But They Don’t Have To

A start-up is poised to usher in the next era of biometric identification: drones using AI-powered facial recognition technology.

AnyVision claims these drones are designed for the private sector. However, it’s highly likely this agile biometric technology will end up seeing use in the law enforcement and defense sectors. AnyVision’s drones raise long-standing questions about the ethics of privacy and about facial recognition technology’s ongoing problems with racial discrimination.

BACKGROUND: The Rise of Facial Recognition Technology

An Israeli start-up, AnyVision, submitted a patent application in 2019 for drones built with AI-powered facial identification technology. Published this year and titled “Adaptive Positioning of Drones for Enhanced Face Recognition,” the patent signals a new frontier in biometrics: photo-taking drones armed with an identification algorithm that boasts a high accuracy rate.

An AnyVision drone will assess and photograph its target from the best available angles and proximity. The drone then uses “face classification” and a “probability score” to identify its target with a high degree of accuracy. If the probability score falls below a certain threshold, the drone repeats the process until the identification qualifies as “highly probable.”
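
The patent describes this as an iterative feedback loop rather than publishing any code. As a rough illustration, a minimal Python sketch of the capture, score, and reposition cycle might look like the following; every function name and the threshold value here are hypothetical stand-ins, not AnyVision’s implementation.

```python
import random

HIGHLY_PROBABLE = 0.95  # assumed confidence threshold, not from the patent
MAX_ATTEMPTS = 10       # bail out rather than hover forever

def capture_image(pose):
    """Stand-in for the drone camera: a 'photo' taken from the current pose."""
    return {"pose": pose}

def classify_face(photo):
    """Stand-in for the classifier: returns (identity, probability score)."""
    return "subject-001", random.uniform(0.5, 1.0)

def identify_target():
    pose = 0
    for _ in range(MAX_ATTEMPTS):
        photo = capture_image(pose)
        identity, score = classify_face(photo)
        if score >= HIGHLY_PROBABLE:  # identification is "highly probable"
            return identity, score
        pose += 1                     # reposition for a better angle/distance
    return None, 0.0                  # no confident match

print(identify_target())
```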

In a statement to Fast Company early this year, AnyVision CEO Avi Golan dismissed speculation that the drones were designed to be weaponized. “AnyVision is not involved in weapons development and is focused on the many opportunities in the civilian market,” said Golan.

AnyVision’s website lists several civilian industries, such as “banking” and “healthcare,” as its target buyers. However, defense and law enforcement regularly utilize drones and facial identification algorithms of varying sophistication. Indeed, a Georgetown University Center on Privacy and Technology study showed that more than half of police jurisdictions nationwide already use facial recognition and identification technology. It stands to reason that innovation such as AnyVision’s is likely to be adopted for policing and defense in the future.

ANALYSIS: Why Facial Recognition Suffers from Racial Profiling

Drones and facial recognition technology are well-integrated into civilian life and defense. However, studies show both raise concerns about racial profiling and about shrinking privacy in an increasingly surveilled world.

Mugshots are already used for recognition and identification algorithms, which raises concern over new innovations perpetuating the criminalization of minorities. The sheer number of Americans living with their information inside police databases is staggering. Harvard researchers estimate, “Almost half of American adults – over 117 million people, as of 2016 – have photos within a facial recognition network used by law enforcement.” 

The disproportionate number of arrests of Black Americans means they are invariably overrepresented in these algorithms. The ACLU points out: “Since Black people are more likely to be arrested than white people for minor crimes like cannabis possession, their faces and personal data are more likely to be in mugshot databases.” 

Racial discrimination is likely to be exacerbated if innovators such as AnyVision are not prepared to address the sprawling discriminatory policing practices that have seen Black identities overrepresented in the first place.

Just as troubling is the low accuracy of even sophisticated facial recognition systems in identifying Black faces.

While white, middle-aged male faces saw the highest identification accuracy, researchers have found consistently lower accuracy rates for faces that were “female, Black, and 18-30 years old.” Darker-skinned women saw inaccuracy rates up to 34% higher than their lighter-skinned counterparts.
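
Disparity figures like these come from straightforward arithmetic: compute the error rate separately for each demographic group, then report the gap between the best- and worst-served groups. A minimal sketch with invented counts (illustration only, not study data):

```python
# Illustration of the audit arithmetic; these counts are invented, not study data.
results = {
    # group: (misidentifications, faces tested)
    "lighter-skinned women": (3, 300),
    "darker-skinned women": (104, 300),
}

rates = {group: errors / total for group, (errors, total) in results.items()}
for group, rate in rates.items():
    print(f"{group}: {rate:.1%} error rate")

# The headline number is the gap between the worst- and best-served groups.
print(f"disparity: {max(rates.values()) - min(rates.values()):.1%}")
```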

Inaccuracies can have dangerous — even lethal — consequences. Wrongful arrests and police violence, both of which already disproportionately affect Black communities across the U.S., could be exacerbated when systemic racism and implicit trust in technological accuracy collide.

While thorough identification systems such as the one AnyVision proposes may improve equity in identification, how the actors wielding the product choose to implement it raises concerns among organizations such as the ACLU.

AnyVision’s brief entitled “A Return to Gyms and Fitness Centers” details how one of its cutting-edge identity alert functions might be turned into a “watchlist” for anyone a gym or other membership-based center designates as unwanted, suspicious, or as someone who “[has] had a history of violence.” What constitutes undesirability is entirely at the discretion of the customer using AnyVision’s product.
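
Mechanically, such an alert function reduces to comparing each detected face against a gallery the customer curates. A minimal sketch, assuming faces are represented as embedding vectors compared by cosine similarity (the names, vectors, and threshold below are all hypothetical, not AnyVision’s API):

```python
import math

ALERT_THRESHOLD = 0.6  # assumed; in practice the operator tunes this

# Embeddings enrolled entirely at the customer's discretion.
watchlist = {
    "banned-member-17": [0.1, 0.8, 0.3],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

def check_face(embedding):
    """Return the watchlist entry this face matches, if any."""
    for person, enrolled in watchlist.items():
        if cosine_similarity(embedding, enrolled) >= ALERT_THRESHOLD:
            return person
    return None

print(check_face([0.12, 0.79, 0.28]))  # -> "banned-member-17"
```

Nothing in such a loop constrains who gets enrolled; the gallery and the threshold are wholly the operator’s choices, which is precisely the concern raised above.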

Improved accuracy in identifying Black faces is meaningless if actors use the product to enact prejudices. Questions remain regarding how a fitness center, police department, or even AnyVision itself might balance newfangled identification capabilities with the possibility of abuse.

More broadly, concern over freedom of expression and privacy in public spaces dogs the advent of drone identification technology. Fitting AI-powered facial identification to a drone means a dragnet search of even a large crowd can occur without a single individual being aware of it. A lack of transparency about the process and a lack of public cooperation mean identities can be collected in secret and used for various purposes without consent.

The Georgetown Center on Privacy and Technology has raised concerns over facial recognition technology’s chilling effect on freedom of speech and expression. First Amendment law is, as of now, unclear on the use of facial recognition and identification technology: AnyVision’s drones will be flying to the frontiers of privacy, leading experts to consider what privacy might look like in the near future for a U.S. citizen.

 

What will be the future of biometric data? (Image: Pixabay)


Given the legal ambiguity, and given that police were documented videotaping 2020 protests against police brutality, it does not seem unthinkable that police would take the next step and include facial identification technology in their monitoring of free speech activities. Indeed, Georgetown researchers found that among law enforcement agencies, including the FBI and Department of Homeland Security, “… almost none of the agencies using face recognition have adopted express prohibitions against it being used to track political or other First Amendment activity.”

An environment of fear and individuals seeking self-preservation could lead to a diminishment of free speech and other forms of expression that are ostensibly protected in the U.S.

The U.S. military has begun grappling with the possible moral dilemmas of expanding AI and surveillance technology development. As recently as February 2021, the U.S. Army Combat Capabilities Development Command (known as DEVCOM) acknowledged that as autonomous machines integrate more fully into public life and defense, it is essential to develop AI that makes decisions civilians would perceive as ethical.

DEVCOM acknowledged that this topic is “understudied”; much of the military’s current research focuses on mitigating life-threatening situations where casualties may be expected. The stickier dilemmas facing AI ethics, such as racial profiling and diminishing public anonymity, remain unaddressed.

The contentious issues surrounding AnyVision and identification algorithms came to a head when it was widely publicized that AnyVision’s facial recognition tech was used at checkpoints between the West Bank and Israel. Reuters reported that in 2020 the controversy pushed Microsoft to sell its stake in AnyVision and to swear off any future investments in facial recognition technology.

However, as the technology becomes available, it is all but certain to become integrated and normalized in daily life and national defense despite pushback. According to the National Security Commission on Artificial Intelligence (NSCAI), the surest way to ensure AI technology is used responsibly and effectively in the U.S. is to keep national security departments ahead of the technological curve rather than shying away from it. In early 2021, the NSCAI released a report detailing recommendations for productive engagement with AI-driven technology. It noted that international adversaries, domestic criminals, and terrorists could and do employ this technology. Therefore, it is incumbent on the federal government to understand — and defend against — AI-powered offenses.

To promote responsible handling of AI such as facial recognition, the NSCAI advised the federal government to move swiftly to integrate, upgrade, and teach cutting-edge technology for U.S. security and advancement: it recommended that by “2025, the foundations for widespread integration of AI across DoD must be in place.” Regarding human engagement with the technology, the NSCAI noted: “The human talent deficit is the government’s most conspicuous inhibitor to buying, building, and fielding AI-enabled technologies for national security purposes.” Without individuals willing and able to improve and advance the technology, the U.S. risks vulnerability to bad actors at home and abroad. Further, individuals trained to be sensitive to bias can detect where an algorithm may fail and improve its accuracy.

Meanwhile, industry professionals and scholars are engaged in dialogue about how best to meet the technology’s ethical challenges. AnyVision, a company on the cutting edge of facial recognition technology, took visible steps to address public contention with the innovation at Fordham University Law School’s winter 2020 “Facial Recognition: Challenges and Solutions” conference. Enrico Montagnino, AnyVision’s VP for Europe, the Middle East, and Africa, recognized racial bias as a potential pitfall of the technology, stating, “Facial recognition solutions must be devoid of algorithmic bias. In computer vision, without proper safeguards, bias may exist based on race or ethnicity, gender, and age.” To combat potential bias, Montagnino pointed to AnyVision’s commitment to using “diverse data sources” to train its AI for overall accuracy.
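
Montagnino did not specify how AnyVision assembles those data sources. One common approach to “diverse” training data, sketched below as an assumption rather than AnyVision’s method, is to rebalance the training set so that each demographic group contributes equally:

```python
import random

def balance_by_group(samples, per_group=1000):
    """Resample (image, group) pairs so every group contributes equally.
    Group labels are placeholders for whatever annotation the data carries."""
    by_group = {}
    for image, group in samples:
        by_group.setdefault(group, []).append(image)
    balanced = []
    for group, images in by_group.items():
        # Oversample small groups and subsample large ones to a common size.
        balanced += [(random.choice(images), group) for _ in range(per_group)]
    random.shuffle(balanced)
    return balanced
```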

AnyVision has also advocated for a technologically savvy legal framework to keep abreast of AI-borne challenges. Montagnino remarked: “The challenge is to build a legal framework that will allow such systems to operate accurately and fairly. … The legal framework for facial recognition is being written right now. In most countries, there is still no appropriate regulation for the operation of facial recognition and artificial intelligence technology in general.” AnyVision recognizes that law can close the gaps that might allow for misuse and abuse of its products, and that a sound system of governance can assuage public concerns over facial recognition technology’s integration into daily life.

OUTLOOK: The Future Use of Facial Recognition Technology

As AnyVision’s drone innovation goes live, the heated debate over whether it will escalate racial discrimination and surveillance will undoubtedly mount.

Speaking to Forbes, Sacramento Police Department lieutenant Mike Hutchins recognized the ambivalence surrounding sophisticated and pervasive identification systems. He said of police drones utilizing facial recognition, “We’re trying to balance technology with people’s right to privacy. And obviously, if you walk into a grocery store, into a retail store, into a bank, they’re capturing your face as you walk in. Pretty much anywhere you go in public now, your face is being captured by cameras that are clearly capable of running facial recognition software.” 

The Pandora’s box of technology cannot be unopened; only how actors such as law enforcement choose to implement it will determine whether it is an innovation for good or ill.

Researchers will undoubtedly continue puzzling through how AnyVision and others can avoid the challenges that systemic racism and diminishing privacy pose. What AnyVision and its targeted industries, including law enforcement and the military, choose to do with that knowledge remains to be seen.
