AI blind spot (Adobe Firefly/AI-generated image)

Vulnerabilities in Deep Learning for Remote Sensing Expose AI’s “Blind Spot”

Chinese researchers have uncovered significant vulnerabilities in the AI models powered by Deep Neural Networks (DNNs) that remote sensing applications increasingly rely on.

From intelligence and transportation to climate monitoring and disaster management, Deep Learning (DL) models have taken on a crucial role in analyzing data from remote sensors. However, a research team from Northwestern Polytechnical University and Hong Kong Polytechnic University discovered that AI systems are prone to errors in judgment and highly vulnerable to adversarial exploitation.

Image Analysis by AI Models

Today, AI models perform tasks that were once the exclusive domain of trained human analysts. Airborne and satellite sensors collect raw visual data, and deep learning models sift through that data to identify objects and extract actionable intelligence. Although AI models can surpass human performance in some scenarios, they lack the intuitive reasoning and creativity of the human mind. These models may produce seemingly accurate results, but their rationale can be flawed. The opacity in how DNNs operate raises additional concerns. The research team aimed to investigate just how deep these vulnerabilities run.

“We sought to address the lack of comprehensive studies on the robustness of deep learning models used in remote sensing tasks, particularly focusing on image classification and object detection. Our aim was to understand the vulnerabilities of these models to various types of noise, especially adversarial noise, and to systematically evaluate their natural and adversarial robustness,” explained lead author Shaohui Mei from the School of Electronic Information at Northwestern Polytechnical University in Xi’an, China.

Investigating AI Effectiveness

To investigate, the team began by reviewing existing research and developing benchmarks to assess how well AI models could detect and classify objects in images. They paid special attention to challenging conditions. How did factors such as random noise or inclement weather affect the AI’s accuracy?

In addition to evaluating natural conditions, the researchers also examined how AI models could be vulnerable to digital and physical attacks. Digital attacks on AI models are already well understood. The team tested a variety of known attack methods, including the Fast Gradient Sign Method (FGSM), AutoAttack, Projected Gradient Descent (PGD), Carlini & Wagner (C&W), and Momentum Iterative FGSM (MI-FGSM). Physical attacks, in contrast, don’t involve damaging equipment but rather trick the AI by placing or painting patches that interfere with its ability to recognize objects in its environment.
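
For readers curious what the simplest of these digital attacks looks like in practice, below is a minimal, hedged sketch of FGSM written in PyTorch. The model, input tensor, label, and epsilon value are illustrative placeholders, not the research team's actual code or settings.

```python
# Minimal FGSM sketch (PyTorch). The model, image, label, and epsilon are
# illustrative placeholders, not the study's actual setup.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Nudge each pixel in the direction that increases the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One step along the sign of the gradient, clamped to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Iterative methods such as PGD and MI-FGSM repeat this same step many times with a smaller step size, which generally produces perturbations that are harder for a model to shrug off.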

Vulnerabilities Revealed

The physical world presents DL models with various natural challenges. Environmental noise, such as fog or rain, can reduce the clarity of data the AI relies on, making object identification more difficult. Additionally, routine wear and tear on sensing equipment can degrade data quality, forcing AI to work with increasingly poor-quality images. Researchers emphasize the importance of training DNNs not only in ideal conditions but also in adverse scenarios to prepare for real-world applications.
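
To illustrate that recommendation, the sketch below shows one simple form of robustness-oriented training augmentation: corrupting images with random Gaussian noise to mimic sensor degradation. The function name and noise level are assumptions for demonstration only; the team's benchmarks cover a broader range of corruptions, such as fog and rain, that are harder to simulate in a few lines.

```python
# Hedged sketch: simulate sensor degradation during training by adding Gaussian
# noise. Assumes image tensors normalized to [0, 1]; sigma is an illustrative value.
import torch

def add_sensor_noise(images, sigma=0.05):
    """Return a copy of the batch corrupted with zero-mean Gaussian noise."""
    noise = torch.randn_like(images) * sigma
    return (images + noise).clamp(0.0, 1.0)
```

Training on a mix of clean and corrupted images in this way is a common, low-cost step toward hardening a model against the kinds of natural noise such benchmarks measure.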

Digital attacks often pit one AI system against another, with an attacking model crafting inputs that exploit weaknesses shared with its target. In such matchups, the more robust AI is likely to prevail, but techniques like “momentum” or “dropout” can significantly improve the attack performance of even relatively weak models. As a result, it is crucial for researchers to gain a deeper understanding of how AI makes decisions so they can identify and address potential vulnerabilities before models are deployed in critical operations.
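
The “momentum” mentioned above refers to smoothing the attack gradient across iterations, the core idea behind the Momentum Iterative FGSM the team tested. The sketch below shows that accumulation step under assumed parameter names and values; it illustrates the general technique rather than the researchers' implementation.

```python
# Hedged sketch of a momentum-based iterative attack (in the spirit of MI-FGSM).
# All names and values are illustrative assumptions, not the study's code.
import torch
import torch.nn.functional as F

def momentum_attack(model, image, label, epsilon=0.03, steps=10, mu=1.0):
    alpha = epsilon / steps                  # per-iteration step size
    adversarial = image.clone().detach()
    g = torch.zeros_like(image)              # accumulated ("momentum") gradient
    for _ in range(steps):
        adversarial.requires_grad_(True)
        loss = F.cross_entropy(model(adversarial), label)
        grad = torch.autograd.grad(loss, adversarial)[0]
        # Accumulate the normalized gradient so the update direction is smoothed
        # across iterations, which tends to transfer better between models.
        g = mu * g + grad / (grad.abs().mean() + 1e-12)
        # Take a step, then keep the total perturbation inside the epsilon ball.
        perturbation = (adversarial.detach() + alpha * g.sign() - image).clamp(-epsilon, epsilon)
        adversarial = (image + perturbation).clamp(0.0, 1.0).detach()
    return adversarial
```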

One of the most notable findings was that physical manipulation can be just as effective as digital manipulation. Of particular interest is the role of background manipulation: the researchers discovered that altering the background of an object made it harder for DNNs to recognize that object, even more so than changing the object itself. Patches or visual interference in the background severely impaired the models’ ability to identify targets.

While much of the work on defending image recognition models has focused on digital attacks and on the objects themselves, the team’s new study shows that physically manipulating backgrounds is a highly effective tactic against DL models.

“[Our] next step[s] involve further refining our benchmarking framework and conducting more extensive tests with a wider range of models and noise types,” Mei said.

“Our ultimate goal is to contribute to developing more robust and secure DL models for RS [remote sensing], thereby enhancing the reliability and effectiveness of these technologies in critical applications such as environmental monitoring, disaster response, and urban planning,” Mei added.

Ryan Whalen covers science and technology for The Debrief. He holds a BA in History and a Master of Library and Information Science with a certificate in Data Science. He can be contacted at ryan@thedebrief.org and followed on Twitter @mdntwvlf.