For years, researchers have debated whether the algorithms behind social media feeds simply reflect our existing beliefs, likes, and interests—or actively shape them. Now, a large-scale experimental study suggests the answer may be far more consequential than previously demonstrated.
In a controlled experiment involving nearly 5,000 active users of X, the platform formerly known as Twitter, researchers found that exposure to the platform’s algorithmically curated “For You” feed significantly shifted political attitudes toward more conservative positions.
Intriguingly, these changes persisted after users returned to a traditional chronological feed, suggesting that algorithmic influence may leave a lasting imprint on political perspectives.
Published in Nature, the findings offer some of the strongest empirical evidence to date that social media algorithms can do more than simply organize information. Rather, they can also subtly reshape the political environment users inhabit and the opinions they hold.
“We assigned active US-based users randomly to either an algorithmic or a chronological feed for 7 weeks, measuring political attitudes and online behaviour,” researchers wrote. “Switching from a chronological to an algorithmic feed increased engagement and shifted political opinion towards more conservative positions, particularly regarding policy priorities, perceptions of criminal investigations into Donald Trump, and views on the war in Ukraine.”
The study comes amid growing global concern about the role of social media in shaping public opinion, elections, and political polarization. While previous experiments—most notably collaborations with Meta—found little evidence that algorithms significantly altered political attitudes, those studies could not rule out that users' attitudes had already been shaped by earlier exposure to the algorithms.
This new experiment sought to answer that question directly.
The international research team conducted a randomized field experiment during the summer of 2023 involving 4,965 active X users in the United States. Participants were recruited through the YouGov survey platform and agreed to use either X’s algorithmic “For You” feed or its chronological “Following” feed for approximately seven weeks.
Unlike chronological feeds, which display posts from accounts users follow in time order, algorithmic feeds actively curate and prioritize content. They also often introduce posts from accounts users do not follow, based on predicted relevance and engagement potential.
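The conceptual difference can be illustrated with a toy sketch. This is a simplified illustration, not X's actual ranking system; the field names and the idea of a single "predicted engagement" score are hypothetical stand-ins for whatever signals the real algorithm uses.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    predicted_engagement: float  # hypothetical relevance/engagement score
    author_followed: bool        # whether the viewer follows the author

def chronological_feed(posts):
    """'Following'-style feed: only accounts the user follows, newest first."""
    followed = [p for p in posts if p.author_followed]
    return sorted(followed, key=lambda p: p.created_at, reverse=True)

def algorithmic_feed(posts):
    """'For You'-style feed: any account, ranked by predicted engagement."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```

In the first case, what a user sees is bounded by whom they already follow; in the second, the ranking model decides both the ordering and which unfamiliar accounts get surfaced at all.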
Participants completed detailed surveys before and after the experiment to measure changes in political attitudes, policy priorities, and perceptions of current events. Researchers also collected behavioral data, including which accounts participants followed and the content appearing in their feeds.
The results revealed a clear pattern. When users switched from chronological to algorithmic feeds, they became more politically engaged and were more likely to adopt conservative policy priorities.
Specifically, those exposed to the algorithm were more likely to prioritize issues typically emphasized by Republican voters, such as immigration, inflation, and crime. They were also more likely to view criminal investigations into former President Donald Trump as unacceptable and expressed less favorable views toward Ukrainian President Volodymyr Zelensky.
By contrast, switching users in the opposite direction—from algorithmic to chronological feeds—produced little change in political attitudes. This asymmetry suggests that initial algorithmic exposure may have lasting effects.
“These results suggest that initial exposure to X’s algorithm has persistent effects on users’ current political attitudes and account-following behaviour, even in the absence of a detectable effect on partisanship,” researchers write.
To understand why the algorithm influenced attitudes, researchers examined the content users saw.
They found that X’s algorithm disproportionately promoted conservative and activist posts while reducing the visibility of content from traditional news outlets. This shift in content exposure influenced users’ behavior in measurable ways.
For example, users exposed to the algorithm were significantly more likely to follow conservative political activist accounts—connections they often maintained even after returning to chronological feeds.
This behavior created a feedback loop. Once users followed new accounts, those accounts continued to shape their information environment regardless of the feed type.
According to the findings, participants exposed to the algorithm showed measurable increases in following conservative accounts and political activist profiles compared with those using chronological feeds. This mechanism helps explain why the algorithm’s effects persisted beyond the experiment itself.
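The persistence mechanism the researchers describe can be sketched in a few lines. The following is a hypothetical illustration of the feedback loop, not the study's analysis code: once the algorithmic feed surfaces an account and the user follows it, that account keeps appearing even after the user switches back to a chronological feed.

```python
# Hypothetical illustration of the follow-graph feedback loop described above.
following = {"friend_a", "friend_b"}           # accounts followed before the experiment

def algorithmic_exposure(candidate_accounts):
    """A 'For You'-style feed can surface accounts the user does not follow."""
    return set(candidate_accounts)             # simplified: everything is eligible

def chronological_exposure(candidate_accounts, following):
    """A 'Following'-style feed only shows accounts the user already follows."""
    return set(candidate_accounts) & following

surfaced = algorithmic_exposure({"activist_x", "friend_a"})
following |= {"activist_x"}                    # user follows an account the algorithm surfaced

# After switching back to the chronological feed, the new follow persists,
# so that account continues to shape the user's information environment.
print(chronological_exposure({"activist_x", "friend_a", "stranger_y"}, following))
# -> {'activist_x', 'friend_a'}
```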
Interestingly, the study found that while the algorithm shifted political attitudes and policy priorities, it did not significantly change participants’ self-reported party identification or emotional polarization between Democrats and Republicans.
In other words, the algorithm appeared to influence specific political views and issue priorities without fundamentally changing how people identified politically.
At the same time, it made users more engaged with the platform. Participants assigned to the algorithmic feed were significantly more likely to maintain or increase their usage than those using a chronological feed. This finding aligns with broader industry knowledge that algorithmic feeds are designed to maximize user engagement.
From a platform perspective, this creates a powerful incentive structure. Algorithms that increase engagement may also shape political views—even unintentionally.
Unlike earlier studies conducted in partnership with social media companies, this latest research was conducted independently. Participants manually switched their feed settings, allowing researchers to observe real-world effects without platform intervention.
This independence strengthens the credibility of the findings by avoiding potential conflicts of interest.
The study also took place during a critical period in X’s history—after Elon Musk’s acquisition of Twitter but before his public endorsement of Donald Trump in July 2024.
Before acquiring the platform, Musk himself accused Twitter of exhibiting a liberal bias. However, the researchers note that previous studies found the platform’s algorithm already tended to prioritize right-wing content, suggesting that elements of its conservative tilt may have predated Musk’s ownership.
“An earlier study examined changes in the content of users’ feeds on Twitter, when the platform introduced the feed algorithm in 2016, well before Musk’s takeover, and found that the algorithm already prioritized right-wing content, despite different platform ownership,” researchers write.
The implications of the study extend far beyond X. Social media platforms have become a primary source of news for millions of people worldwide, and in the United States alone, roughly a quarter of adults now rely on them as their main gateway to current events.
If the algorithms behind those feeds can shift political priorities and perceptions, they could influence democratic processes in subtle, cumulative, and difficult-to-detect ways.
As previously covered by The Debrief, similar concerns have also emerged in research on artificial intelligence. Recent studies examining large language models—the same class of systems powering popular chatbots and AI assistants—have found measurable political bias in their outputs.
In contrast to the right-leaning amplification observed in X’s feed algorithm, those analyses have often found LLMs to lean left on many political and social issues, raising separate questions about how training data and optimization choices shape the information environments users encounter. Researchers warn that such biases in AI outputs could influence public understanding by framing political topics in ways that subtly favor particular viewpoints.
Those concerns are not limited to politics alone. As The Debrief also recently reported, a separate empirical study found that social media personalization algorithms may be quietly altering how the human brain learns.
Researchers discovered that when information is algorithmically optimized to match individual preferences, it can change how people explore new ideas, process feedback, and update their understanding of the world.
Over time, this personalized filtering may narrow learning pathways, reinforcing familiar perspectives while reducing exposure to information that challenges existing beliefs.
Together with the latest findings on X’s political effects, the research adds to growing evidence that algorithmic curation may influence not only what people think, but how they learn, adapt, and form their perceptions in the first place.
In essence, algorithms may quietly shape collective reality—not by forcing beliefs, but by altering the information landscape in which beliefs form. While the effects observed in this recent study were statistically modest at the individual level, their cumulative impact across millions of users could be substantial.
As governments and regulators worldwide grapple with the societal impact of social media, studies like this provide rare experimental evidence of algorithmic influence. They also raise difficult questions.
Should social media algorithms be regulated? Should users have more transparency and control over what they see? Perhaps most importantly: how much of our political worldview is truly our own—and how much is shaped by machines optimizing for engagement?
Ultimately, based on these recent findings, social media algorithms are not neutral mirrors of human thought. They are active participants in shaping it.
Researchers underscored the seriousness of the issue by acknowledging the ethical concerns surrounding their own manipulation of participants' feeds during the experiment, noting that the intervention may have an enduring influence on participants' political views.
“By randomly assigning participants to different feed settings for seven weeks, we inevitably affected their content exposure and potentially their political views—an effect that was observed in the subset of participants initially using the chronological feed,” researchers write. “We were unable to fully mitigate these effects through debriefing after the experiment.”
“Given that the experiment involved sustained exposure to varying feed settings over a seven-week period and led to changes in the accounts followed by a subset of participants, any subsequent debriefing might not fully offset the potential cumulative effects of prolonged exposure to distinct information environments.”
Tim McMillan is a retired law enforcement executive, investigative reporter and co-founder of The Debrief. His writing typically focuses on defense, national security, the Intelligence Community and topics related to psychology. You can follow Tim on Twitter: @LtTimMcMillan. Tim can be reached by email: tim@thedebrief.org or through encrypted email: LtTimMcMillan@protonmail.com
