New Survey Reveals Differences Along Political Lines on Social Media Content Removal

Misinformation and disinformation can be used to influence voters and elections, and a recent survey reveals how partisan interests and biases shape the debate over content removal on social media sites.

In recent years, North American social media companies have faced growing demands to eliminate misinformation from their platforms. Yet differences along partisan lines regarding what constitutes misinformation have impeded efforts to combat this issue, particularly in the United States. 

With a growing worldwide understanding of how social media influences the average person, it comes as no surprise that controversy over false or misleading headlines has intensified, with U.S. elections right around the corner.

A recent survey by Ruth Appel from Stanford University and Margaret Roberts from the University of California, San Diego, highlights the significance of partisan motivations and preferences in shaping the discussion around content removal on social media platforms.

Partisan motivations refer to the underlying political or ideological biases and preferences individuals hold based on their affiliation with a political party. These motivations can influence a person’s beliefs, actions, and decisions in ways that align with the goals and values of their own party.

The survey analyzed the perspectives of Democrats and Republicans on misleading partisan headlines on social media. According to its findings, Democrats were twice as likely to support the removal of false content. In contrast, Republicans were twice as likely to perceive such removal as censorship.

Prior explanations of this disagreement have mainly focused on a “fact gap,” or differences in perceptions about what counts as misinformation. The study abstract argues that partisan differences could also be due to “party promotion” (a desire to leave misinformation online that promotes one’s own party) or a “preference gap” (differences in internalized preferences about whether misinformation should be removed).

Researching the Politics of Content Removal  

During the summer of 2021, Ruth Appel, a Stanford University researcher, conducted a pre-registered survey of 1,120 U.S. participants, 673 Democrats and 447 Republicans, to analyze whether partisan motivation or party favoritism influenced their views on online content moderation.

During the experiment, participants were exposed to false headlines that favored either their own political party or the opposing party. At the beginning of the study, participants were shown 18 false headlines, nine with pro-Democrat views and nine with pro-Republican views, and were informed that the headlines were indeed false. Appel then asked each participant whether they believed the headline to be true, whether it should be removed, and whether its removal would constitute censorship. The final question asked whether they would report the headline as harmful content.

In general, Democrats were more inclined to advocate for the removal of false headlines, with a 0.69 probability, than Republicans, who expressed a 0.34 probability for the same stance. Democrats also exhibited a 0.49 probability of considering false headlines harmful and reporting them as such, while Republicans showed a lower probability of 0.27. Interestingly, Democrats were less likely to perceive the removal of false headlines as censorship, with a 0.29 probability, whereas Republicans perceived it as censorship with a significantly higher probability of 0.65.

In the end, the researchers found evidence of both party promotion and a preference gap between the two parties. Democrats were generally twice as likely as Republicans to think false content should be removed, even when the content being removed appeared to favor their own party. Conversely, even when Republicans acknowledged that content was false, they were twice as inclined to view its removal as an act of censorship.

The research team noted in their press release, “Americans seem to have diverging preferences about the concept of content removal and whether the protection of free speech necessitates or precludes the moderation of content.”  

Misinformation vs. Disinformation: Why Political Ads Are Allowed to Run Unverified Claims

Misinformation on social media platforms has emerged as a significant concern in recent years. Yet Democrats and Republicans in the United States hold contrasting views on the content that should be subject to moderation. The American Psychological Association (APA) identifies misinformation as false or inaccurate information—in other words, information that gets the facts wrong but is not knowingly false. By contrast, disinformation is false information that is deliberately intended to mislead and intentionally misstates the facts.

Taking a deeper look at the political side of misleading social media content, it is evident that politicians have used the First Amendment to their advantage during election cycles, often in ways that are conducive to the spread of misleading information.

In 2019, The New York Times reported that Facebook was under pressure over its policy, grounded in First Amendment considerations, of refraining from fact-checking political advertisements, even when they might include incorrect information.

The company, while expressing its commitment to its policies, nonetheless said it had been looking at “different ways we might refine our approach to political ads,” according to a statement provided by a Facebook spokesperson to the Times. 

Content moderation disputes often seem to result from differences in partisan beliefs. Still, the new research highlights more precisely how these conflicts may also be influenced by partisan biases and the motivation to champion one’s own party while criticizing the opposition.

The new paper, “Partisan conflict over content moderation is more than disagreement about facts,” appeared in Science Advances on November 3, 2023.

Chrissy Newton is a PR professional and founder of VOCAB Communications. She hosts the Rebelliously Curious podcast, which can be found on The Debrief’s YouTube Channel. Follow her on X: @ChrissyNewton, or at chrissynewton.com.