Disinformation, Covert Influence, and Cyber-Espionage Still a Major Problem on Social Media, Latest Meta Report Reveals

Meta, the parent company of Facebook, Instagram, and WhatsApp, released its 2023 first-quarter Adversarial Threats Report, offering the latest insights on how bad actors use social media for malicious activities. 

The report focuses on three key adversarial trends observed by Meta’s security teams during the first three months of 2023. These include malware campaigns, covert influence operations, and cyber espionage by state-sponsored adversarial threat networks and private for-hire disinformation firms. 

Meta’s Chief Information Security Officer, Guy Rosen, discussing these latest threats, said malware operators have taken a keen interest in exploiting public interest in emerging Artificial Intelligence (AI) technologies like ChatGPT.

“Our threat research has shown time and again that malware operators, just like spammers, are very attuned to what’s trendy at any given moment. They latch onto hot-button issues and popular topics to get people’s attention,” Rosen said.

“The latest wave of malware campaigns have taken notice of generative AI technology that’s captured people’s imagination and excitement.” 

In March alone, Meta security teams uncovered ten malware families posing as ChatGPT or similar AI chatbot tools and blocked over 1,000 malicious URLs across Meta’s family of social media apps. 

“Some of these malicious extensions did include working ChatGPT functionality alongside the malware,” Rosen said. “This was likely to avoid suspicion from the stores and from users.” 

The report shared findings on nine adversarial networks using social media for cyber espionage and covert influence operations. 

Meta disabled 120 Facebook and Instagram accounts linked to a state-sponsored advanced persistent threat (APT) group operating out of Pakistan. 

According to Meta, the Pakistan-based APT group was engaged in coordinated cyber espionage operations, primarily targeting military personnel in India and the Pakistan Air Force. 

Using fictitious social media personas, the Pakistan-based APT group built trust with its intended targets by posing as recruiters for legitimate and fake defense companies, military personnel, journalists, and women looking to make a romantic connection. 

The group’s ultimate goal was infecting victims with a type of malware known as GravityRAT. Security experts describe GravityRAT as a “low-sophistication” spyware program that has been used to target members of the Indian armed forces since at least 2015. 

Another 110 Facebook and Instagram accounts were disabled after they were found to be linked with a cyber espionage operation run out of South Asia by a hacking group known as Bahamut APT. 

Meta says Bahamut APT hackers posed as recruiters at large tech companies, journalists, students, and activists to trick targets into sharing sensitive information or installing malware on Android mobile devices. 

Bahamut APT’s primary targets included military personnel, government employees, and activists in Pakistan and India, including disputed areas in the Kashmir region. 

Finally, Meta says it disabled 50 Facebook and Instagram accounts associated with the India-based hacking group Patchwork APT. 

Patchwork APT used fake personas, posing as journalists in the United Kingdom and United Arab Emirates, to trick targets into clicking on malicious links or downloading malicious apps that would give hackers access to a victim’s computer or mobile device. 

The primary targets of Patchwork APT were military personnel, activists, and minority groups in Pakistan, India, Bangladesh, Sri Lanka, the Tibet region, and China. 

Meta said it also disrupted six major networks engaged in operations to covertly influence public opinion, which Meta terms “coordinated inauthentic behavior.”

In one example, 40 Facebook accounts, eight pages, and one group were found to be linked with an Iranian covert influence operation targeting Israel, Bahrain, and France. 

The group used its fake profiles to distribute allegedly damaging information supposedly hacked from various government agencies, educational institutions, logistics and transport companies, or news outlets. 

“We cannot confirm if any of the claimed attacks against these entities have, in fact, occurred. We removed this network before it was able to gain a following among authentic communities on our platforms,” wrote Meta. 

Another 153 accounts, 79 pages, and 37 groups were removed after being discovered as part of two large-scale Chinese covert influence operations. 

Through an extensive fictitious presence across social media and the internet, the groups disparaged Uyghur activists and critics of the Chinese state and shared misinformation designed to push geopolitical narratives favorable to China. 

One of the two groups additionally tried to hire “part-time” protesters for various causes, such as the alleged discharge of nuclear waste from Fukushima in Japan and protests in Budapest against George Soros. One group spent $74,000 on ads on Facebook and Instagram. 

Meta says one of the groups showed similarities to another China-based influence group that was discovered and disabled in September 2022. Specifically, the group was almost exclusively active from 9 a.m. to 5 p.m., Monday to Friday, with a dip in activity during lunchtime. 

Security analysts said the other group is believed to be linked to the Chinese IT company Xi’an Tianwendian Network Technology. 

In one of the more interesting highlights from the recent Adversarial Threats Report, Meta called out an apparent US-based misinformation-for-hire firm. 

According to Meta, Predictvia, a business registered in Miami, Florida, posed as news media outlets, journalists, and lifestyle brands on 28 fake Facebook and Instagram accounts and 54 Facebook pages as part of a covert political influence operation targeting elections in Guatemala and Honduras. 

The accounts shared memes and long- and short-form text posts in Spanish criticizing the mayor of the Guatemalan city of San Juan Sacatepéquez, Juan Carlos Pellecer, and the alleged political corruption of the president of the Honduran Congress, Luis Redondo. 

Ironically, a website attributed to the company says, “Predictvia is in [sic] the front line of the fight against misinformation” and claims its platform “monitors and controls coordinated efforts to manipulate public discourse through fake social media accounts and other digital assets.”

Meta says it has banned Predictvia from using its services and issued the firm a cease-and-desist letter. 

Half of the covert influence operations highlighted in the Meta report were linked to private entities, including a political marketing consultancy in Togo called the Groupe Panafricain pour le Commerce et l’Investissement (GPCI). 

Meta notes that its first-quarter Adversarial Threats Report does not represent all the adversarial threats its security teams discovered. Instead, it is meant to highlight some key trends analysts detected. 

“This report is not meant to reflect the entirety of our security enforcements, but to share notable trends and investigations to help inform our community’s understanding of the evolving security threats we see,” the Meta report reads. 

Independent security experts say the number of disinformation, covert influence, and cyber espionage operations going undetected by internal social media security teams is likely far higher than the number being caught. 

In an October 2022 article in The Conversation, three academic experts on social media gave Meta’s current handling of misinformation letter grades of C, B-, and C. 

“One important consideration: Users are not constrained to using just one platform. One company’s intervention may backfire and promote cross-platform diffusion of misinformation,” said Dr. Dam Hee Kim, an assistant professor of communication at the University of Arizona. 

“Major social media platforms may need to coordinate efforts to combat misinformation.” 

Meta’s major stake in the social media ecosystem lies in Facebook and Instagram. However, the company notes that nearly every detected covert influence operation used coordinated fictitious entities across almost every corner of the internet, including Twitter, Telegram, YouTube, Medium, TikTok, Blogspot, and Reddit, and maintained their own website domains. 

“We’ve seen that our and industry’s efforts are forcing threat actors to rapidly evolve their tactics in attempts to evade detection and enable persistence,” said Rosen. “One way they do this is by spreading across as many platforms as they can to protect against enforcement by any one service.” 

“When bad actors count on us to work in silos while they target people far and wide across the internet, we need to work together as an industry to protect people.” 

Tim McMillan is a retired law enforcement executive, investigative reporter, and co-founder of The Debrief. His writing typically focuses on defense, national security, the Intelligence Community, and topics related to psychology. You can follow Tim on Twitter: @LtTimMcMillan.