The use of artificial intelligence to create convincing false images or videos depicting people doing or saying things that never actually occurred, commonly known as “deepfakes,” is a reality people have become increasingly familiar with. However, a group of scientists from the University of Washington is sounding the alarm about another deepfake concern that could become a growing problem in the near future.
The concern: “deepfake geography,” also known as “location spoofing.”
“This isn’t just Photoshopping things. It’s making data look uncannily realistic,” said Dr. Bo Zhao, an assistant professor of geography at the University of Washington and lead author of a study of deepfake geography published April 21 in the journal Cartography and Geographic Information Science.
“The techniques are already there. We’re just trying to expose the possibility of using the same techniques and of the need to develop a coping strategy for it.”
In the recent study, Zhao and his co-authors point out that inaccuracies in mapmaking are hardly new and date back to ancient times. To some extent, inconsistencies are unavoidable given the nature of translating physical locations and features into readable map form.
While it is nearly impossible to capture exact geographic details on a map, not all inaccuracies are unintended. For various reasons, mapmakers will sometimes deliberately place false mountains, rivers, or even “paper towns” on their maps, depicting features or cities that don’t actually exist.
One famous example appeared on the 1978–79 official Michigan state highway map, which included the fictional towns of “Goblu” and “Beatosu.” The fake towns were added by then-chairman of the Michigan State Highway Commission, Peter Fletcher, as a nod to his alma mater, the University of Michigan, and a jab at Michigan’s bitter college football rival, Ohio State University: “Goblu” represents “Go Blue,” while “Beatosu” more directly signals “Beat OSU.”
Trolling your college football rival with fake towns on a map is relatively harmless and, except perhaps for Ohio State fans, worth a good chuckle. However, in an increasingly data-driven and computational era, sophisticated deepfake geographic spoofing is a genuine concern and a legitimate national security risk.
The potential for deepfake satellite imagery is especially concerning given that the National Geospatial-Intelligence Agency (NGA), the U.S. agency primarily responsible for collecting, analyzing, and distributing geospatial intelligence, has increasingly turned to unclassified, open-source imagery to monitor activities around the globe.
In a speech at the GEOINT Symposium in 2019, the director of NGA, Vice Admiral Robert Sharp, said, “Most of the innovation now happening in geospatial intelligence centers around automation — using artificial intelligence algorithms to analyze imagery and combine that data with other sources of intelligence.”
With greater reliance on geographic information systems, such as Google Earth or other satellite imaging systems, comes increased risk of sophisticated deepfake spoofing. In essence, bad actors could potentially use advanced AI techniques to create false geographic features, even entire fake towns or military build-ups, that would be nearly indistinguishable from the real thing.
To study how deepfake satellite images could be created, Zhao and his team from the University of Washington used a popular deepfake machine learning technique called Cycle Generative Adversarial Network, or CycleGAN.
Unlike other Generative Adversarial Network models, CycleGAN allows image-to-image translation models to be trained automatically without paired examples. The models use deep convolutional neural networks trained in an unsupervised manner on collections of images from the source and target domains, and those collections need not correspond to each other.
CycleGAN makes it possible to develop translation models even when no paired training datasets exist. Popular image filters that map the features of a human face onto a cat are an example of this type of machine-learning technique.
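What makes unpaired training work is CycleGAN’s cycle-consistency constraint: a generator G maps source-domain images toward the target domain, a second generator F maps back, and both are penalized when the round trip F(G(x)) fails to reconstruct the original input. The sketch below illustrates only that loss term, using tiny invertible linear maps as stand-ins for the generators; a real CycleGAN uses deep convolutional generators plus adversarial discriminators, and the names here are illustrative assumptions, not the study’s actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def cycle_consistency_loss(G, F, x):
    """L1 reconstruction error of the round trip x -> G(x) -> F(G(x))."""
    return np.abs(F(G(x)) - x).mean()

# Illustrative "generators": an invertible linear map and its inverse.
# (In CycleGAN proper, G and F are learned neural networks and this loss
# is minimized jointly with two adversarial losses.)
A = rng.standard_normal((4, 4))
G = lambda x: x @ A                  # source domain -> target domain
F = lambda y: y @ np.linalg.inv(A)   # target domain -> source domain

x = rng.standard_normal((8, 4))      # a batch of flattened image patches
loss = cycle_consistency_loss(G, F, x)
print(round(loss, 6))  # prints 0.0: the round trip reconstructs the input
```

Because these toy generators are exact inverses, the cycle loss vanishes up to floating-point error; during real training it starts large and is driven down by gradient descent.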
Researchers used satellite images from Seattle, Washington, and Beijing, China, to explore how AI could use geographic features and urban structures to produce new deepfake images on a base map of Tacoma, Washington.
As study co-author Chunxue Xu, a University of Washington Ph.D. candidate, notes, “It is difficult to quantify geographic features with a certain character or pattern, especially taking spatial variability and heterogeneity into account. Landscape exhibits various patterns and processes in different scales.”
However, researchers found that CycleGAN could extract some of the available features from the spatial distribution of city structures of Seattle and Beijing to create a highly realistic deepfake version of Tacoma.
“The untrained eye may have difficulty detecting the differences between real and fake. A casual viewer might attribute the colors and shadows simply to poor image quality,” the researchers point out.
“Some simulated satellite imagery can serve a purpose,” says Zhao. “Especially when representing geographic areas over periods of time to, say, understand urban sprawl or climate change.”
Researchers note that one positive benefit from deepfake geospatial imaging could be instances when no images for a specific time frame exist for a location. Creating new images based on existing ones could help fill in the gaps and help provide perspective on how a region has changed over time.
Ultimately, researchers say the goal of their study wasn’t to prove that geospatial data can be falsified. Instead, the authors hope that learning how deepfake geography can be produced will lead to techniques for detecting false images and to data-literacy tools that serve the public.
Researchers say they are now examining the more technical aspects of false geospatial image processing, “such as color histograms and frequency and spatial domains,” to better identify deepfake geography.
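Cues like these can be computed directly from pixel data. The sketch below shows two of them: a per-channel color histogram and the fraction of Fourier-spectrum energy outside the low-frequency center, a statistic often used to probe GAN-generated imagery for high-frequency artifacts. The function names and the radius threshold are illustrative assumptions, not the researchers’ actual detection pipeline.

```python
import numpy as np

def color_histogram(img, bins=16):
    """Concatenated per-channel normalized histograms of an HxWx3 uint8 image."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256), density=True)[0]
             for c in range(3)]
    return np.concatenate(hists)

def high_freq_energy(img):
    """Fraction of spectral energy outside a low-frequency window (assumed radius)."""
    gray = img.mean(axis=2)                       # collapse RGB to grayscale
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8                            # illustrative cutoff, not from the study
    low = spec[cy - r:cy + r, cx - r:cx + r].sum()
    return 1.0 - low / spec.sum()

rng = np.random.default_rng(1)
# A smooth vertical gradient vs. pure pixel noise, standing in for
# "plausible terrain" vs. "artifact-heavy synthetic texture".
smooth = np.tile(np.linspace(0, 255, 64, dtype=np.uint8)[:, None, None], (1, 64, 3))
noisy = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

print(color_histogram(noisy).shape)               # prints (48,): 16 bins x 3 channels
print(high_freq_energy(noisy) > high_freq_energy(smooth))  # prints True
```

A real detector would compare such statistics against those of verified imagery of the same region, or feed them to a trained classifier, rather than relying on a single hand-picked threshold.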
“As technology continues to evolve, this study aims to encourage more holistic understanding of geographic data and information so that we can demystify the question of absolute reliability of satellite images or other geospatial data,” said Zhao in a press release. “We also want to develop more future-oriented thinking in order to take countermeasures such as fact-checking when necessary.”