The Federal Bureau of Investigation (FBI) has issued a unique Private Industry Notification (PIN) on deepfakes, warning companies that “malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months.”
The FBI’s grim warning comes at a time when cybersecurity and defense officials have been increasingly vocal about the dangers of synthetic media content, more commonly referred to as “deepfakes.”
In the PIN, the FBI warns that it anticipates deepfakes “will be increasingly used by foreign and criminal cyber actors for spearphishing and social engineering in an evolution of cyber operational tradecraft.”
BACKGROUND: WHAT IS A DEEPFAKE?
Creating or manipulating images and videos to depict events that never actually happened is hardly new. However, advances in machine learning and artificial intelligence have allowed for the creation of compelling and nearly indistinguishable fake videos and images.
Legacy photo editing software uses various graphic editing techniques to alter, change, or enhance images. Photo editing software such as Photoshop can manipulate pictures to include details or even people that weren’t originally in a photo. However, creating convincing false images is highly dependent on a user’s skill with the editing software.
In contrast, deepfakes use machine learning and a type of neural network called an autoencoder. An encoder compresses an image into a lower-dimensional latent space, allowing a decoder to reconstruct the image from that latent representation.
Because the latent representation captures critical features of the original image, such as a person’s facial features and body posture, a decoder trained on a specific target can reconstruct those features with the target’s likeness. The result is a persuasive, highly detailed superimposition of the target’s face onto the original video or image’s underlying facial and body features.
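The encode-compress-decode idea can be sketched in a few lines. This is a minimal illustration, not a real face-swap model: the weights are random rather than trained, the "image" is a toy 64-pixel vector, and the dimensions (64 pixels, a 4-dimensional latent space) are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a flattened 8x8 grayscale frame, 64 pixels.
image = rng.random(64)

# Encoder: project the image into a 4-dimensional latent space.
# In a trained model, this latent vector captures pose, expression,
# and lighting rather than raw pixels.
W_enc = rng.standard_normal((4, 64)) * 0.1
latent = W_enc @ image           # shape (4,) -- the compressed representation

# Decoder: reconstruct a 64-pixel image from the latent vector.
# A face-swap deepfake trains this decoder on footage of a *target*
# person, so the shared latent features are redrawn with the
# target's face.
W_dec = rng.standard_normal((64, 4)) * 0.1
reconstruction = W_dec @ latent  # shape (64,)
```

The key design point is that one shared encoder can feed two different decoders: running the source actor's latent vector through the target's decoder is what produces the swap.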
The most widely used deepfake pipelines attach a generative adversarial network (GAN) to the decoder. A GAN trains a generator and a discriminator against each other in an adversarial relationship, resulting in extraordinarily compelling images that closely mimic reality.
Recently, Belgian VFX specialist Chris Ume created a significant buzz when several compelling deepfake videos he made of actor Tom Cruise went viral on TikTok.
In an interview with The Verge, Ume said it took him two months to train the base AI models on Cruise’s footage to create the brief TikTok clips. Even then, Ume says he had to go through each video frame by frame, making minor adjustments to keep the clips convincing. “The most difficult thing is making it look alive,” Ume told The Verge. “You can see it in the eyes when it’s not right.”
Because of the time and effort it takes to make false videos look realistic, Ume says he doesn’t believe deepfakes are something the public should be too concerned about right now. The visual effects artist likens the modern AI manipulations to “Photoshop 20 years ago.”
ANALYSIS OF DEEPFAKE THREATS
While Ume takes a relatively positive outlook on deepfake technology, the FBI strikes a different tone in the recently published PIN, saying the potential for highly sophisticated deepfake software to sow disinformation and change a person’s view of reality is a genuine and imminent threat.
In the warning, the FBI notes that in 2020, numerous instances of Russian, Chinese, and Chinese-language actors were detected using deepfake profile images to make fake online social media accounts known as “sock-puppets.” These seemingly authentic accounts have been used by hostile governments to push propaganda and engage in social influence campaigns.
Highlighted in the PIN was a 2017 incident in which The Independent published an article that a fictitious “journalist” produced. According to the FBI, the use of deepfakes to develop a robust fake online presence and create fictitious “journalists” to generate content that can be unwittingly published and shared by various online and print media outlets will dramatically increase in the near future.
The FBI also warns of a “newly defined attack vector” called Business Identity Compromise (BIC), whereby malicious cyber actors will leverage synthetic media and deepfakes to commit attacks on the private sector.
The Bureau says bad actors will use deepfake tools to create “synthetic corporate personas” or impersonate existing employees to commit attacks that will likely have “very significant financial and reputational impacts to victim businesses and organizations.”
OUTLOOK: THE FUTURE OF SYNTHETIC MEDIA
Most security analysts have been echoing warnings similar to the FBI’s recent PIN, with some saying coming advances in deepfake technology could “wreak havoc on society.” There are, however, some encouraging signs that the AI used to detect deepfakes is improving just as quickly.
A study published in Nature just last week found that the vast majority of people actively seek not to share false information or “fake news.”
To guard against deepfakes, the FBI encourages using the “SIFT” methodology when consuming information online: Stop, Investigate the source, Find trusted coverage, and Trace the original content.
The PIN also provides some tips on visual clues for identifying deepfakes, “such as distortions, warping, or inconsistencies in images and video.” The FBI gives some examples of where to look for these visual clues, including “consistent eye spacing and placement, noticeable glitches in head and torso movements, as well as syncing issues between face and lip movement, and any associated audio.”
The FBI concludes the recent PIN warning by encouraging anyone who wants to report suspicious or criminal cyber activity to contact the FBI by phone at (855) 292-3937 or by e-mail at CyWatch@fbi.gov.