In an era when nearly everything we see online is filtered, tuned, and optimized for our personal tastes, the information we don’t encounter may matter just as much as what appears on our screens.
According to a new study, the same algorithms designed to keep us clicking may also quietly reshape how we learn, what we believe, and how confident we feel about both—even when we’re wrong.
A team of cognitive scientists from Vanderbilt University and The Ohio State University has found that algorithmic personalization doesn’t just narrow the information people encounter. It also biases how they explore, distorts the mental categories they form, and leads to startling levels of overconfidence about incorrect judgments.
The findings, published in the Journal of Experimental Psychology: General, show that even in a simplified learning environment stripped of politics, emotions, and real-world semantics, personalization alone is enough to warp human understanding.
“Our results show that learners in personalized environments sample feature information more selectively during the learning phase and develop inaccurate representations about the categories,” the researchers write. “Critically, they also report inflated confidence about their inaccurate decisions for categories for which they had little exposure.”
Algorithmic Curation, Human Distortion
Personalization is now ubiquitous—from Netflix queues and social media feeds to targeted shopping recommendations and automated news lists. These systems work by analyzing what people click or watch, then serving more of the same.
Researchers note that while personalization “helps users receive the information…that matter most to them,” it may also “lead to a severely distorted impression of reality” when diversity of exposure collapses.
To test how such distortion might play out at the level of basic cognition, the researchers built a controlled learning environment free from politics or emotion.
Instead of news headlines or videos, participants learned to categorize fictional “alien” creatures defined by six visual features, such as shape, curvature, orientation, and brightness. Over time, participants could uncover these features—one click at a time—to infer which alien belonged to which category.
In a twist, some participants saw a balanced set of aliens, while others encountered a personalized sequence generated by a recommendation algorithm modeled after YouTube’s.
Just like real-world platforms, the algorithm learned which features each participant tended to sample, then served more items that would maximize their “engagement,” or the likelihood of clicking more features. Over time, the algorithm funneled participants toward narrower slices of the alien world.
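To make that dynamic concrete, here is a minimal, hypothetical sketch of an engagement-driven selection loop in Python. It is not the study’s actual algorithm; the feature names, item pool, and scoring rule below are invented for illustration, whereas the real system was modeled on YouTube-style recommenders and operated on participants’ actual feature clicks.

```python
import random
from collections import Counter

# Hypothetical illustration only: six features, four made-up categories.
FEATURES = ["shape", "curvature", "orientation", "brightness", "texture", "size"]

def make_item(category, rng):
    """A toy stimulus: a category label plus a value (0-3) for each feature.
    Each category has distinct typical feature values, with occasional noise."""
    base = "ABCD".index(category)
    return {"category": category,
            "features": {f: (base + rng.choice([0, 0, 1])) % 4 for f in FEATURES}}

def engagement_score(item, clicked_pairs):
    """Predicted engagement: how many of this item's (feature, value) pairs the
    learner has already clicked on. In effect, 'more of the same'."""
    return sum(clicked_pairs[(f, v)] for f, v in item["features"].items())

def simulate(trials=60, personalized=True, seed=0):
    rng = random.Random(seed)
    pool = [make_item(c, rng) for c in "ABCD" for _ in range(25)]
    clicked_pairs = Counter()      # (feature, value) pairs the learner has sampled
    category_exposure = Counter()  # which categories the sequence actually showed

    for _ in range(trials):
        if personalized:
            # Serve the item predicted to maximize engagement.
            item = max(pool, key=lambda it: engagement_score(it, clicked_pairs))
        else:
            # Balanced control: no curation, uniform random exposure.
            item = rng.choice(pool)
        category_exposure[item["category"]] += 1

        # Simulated learner uncovers a few features, one click at a time.
        for f in rng.sample(FEATURES, k=rng.randint(1, 3)):
            clicked_pairs[(f, item["features"][f])] += 1

    return category_exposure

print("personalized:", simulate(personalized=True))
print("balanced:    ", simulate(personalized=False))
```

Even in this toy version, whichever category the learner happens to click into first quickly dominates the sequence, while the balanced control keeps exposure roughly even across categories: the same funneling dynamic the study describes.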
And that’s where things began to break.
Personalization Algorithms Shrink Exploration
Participants in personalized conditions gradually sampled fewer features and explored a narrower subset of the alien categories.
“Learners in personalized environments sampled information more selectively than those in the control environments,” the researchers write. “The result shows that personalization of category learning sequences can guide learners to limit the amount and diversity of information they access.”
In contrast, participants allowed to choose freely what to study—without algorithmic influence—tended to sample more widely and evenly across all six features.
The effect was subtle but powerful. Because participants were exposed to fewer categories and fewer dimensions, they began forming distorted internal models of the alien world.
Like a movie watcher who stumbles into a single genre and is then algorithmically trapped there, participants learned more about what the system kept showing them—and far less about everything it didn’t.
Distorted Categories, Confident Mistakes
Things became even more interesting after the learning phase, when participants were tested on their ability to categorize new aliens and asked to rate their confidence in each choice.
Even when participants in personalized conditions were shown aliens from categories they had never seen before, they rarely selected the “novel category” response. Instead, they confidently—and incorrectly—mapped unfamiliar aliens onto categories they had seen.
“Participants who learned a limited subset of categories are likely to be overconfident when they attempt to categorize unfamiliar items,” the researchers write.
In fact, participants often felt more confident when they encountered aliens from categories they had never seen before. The study reports that when a test item came from a completely unfamiliar category, confidence actually rose even as accuracy dropped sharply: a striking metacognitive failure. In other words, participants were most certain precisely when they had the least evidence to justify it.
These findings suggest personalization doesn’t just narrow our exposure. It may make us believe that our incomplete and biased knowledge applies more broadly than it does.
“A broad application of one’s limited and biased categorical knowledge can be problematic for our society because it can result in stereotypical thinking and conceptual biases,” the researchers warn.
A Filter Bubble for Thought Itself
Importantly, this distortion emerged even in the absence of emotionally charged topics, political content, or preexisting beliefs. The aliens were synthetic. Participants had no prior associations or biases. The only difference was whether they saw a balanced world or an algorithmically curated one.
That makes the implications particularly concerning. If personalization alone can distort human category formation and inflate confidence in wrong answers—even in a neutral artificial domain—its effects in real-world environments full of social, political, and cultural meaning may be far more dramatic.
The researchers argue that “the relationship between the personalization algorithm and learners interacting with it is not one-directional but rather interactive, and that personalization can contribute to the development of initial biases in one’s belief system.”
In other words, algorithms shape human learning, and human behavior shapes the algorithm, locking both into a cycle that limits exploration and magnifies misperception.
The study’s findings touch on a growing concern that systems optimized for engagement rather than understanding may influence everything from political polarization and operational decision-making to national security.
Military analysts increasingly rely on algorithmic filtering tools. Intelligence systems prioritize data based on inferred user needs. Even scientific literature recommendations are personalized. The new study suggests that unless carefully managed, such systems may shrink the mental search space and embed false confidence.
From aerospace engineering to battlefield intelligence, mistaken certainty can be far more dangerous than honest uncertainty.
Ultimately, personalization is not inherently harmful. It can help people navigate information overload. However, this new study suggests that when algorithms constrain what we see—without us realizing it—they may also constrict how we think.
Moreover, in a world increasingly shaped by algorithms, understanding how these systems shape us may be one of the most important challenges societies will face.
“If you have a young kid genuinely trying to learn about the world, and they’re interacting with algorithms online that prioritize getting users to consume more content, what is going to happen?” said study co-author Dr. Brandon Turner, a professor of psychology at The Ohio State University, in a statement. “Consuming similar content is often not aligned with learning. This can cause problems for users and ultimately for society.”
Tim McMillan is a retired law enforcement executive, investigative reporter and co-founder of The Debrief. His writing typically focuses on defense, national security, the Intelligence Community and topics related to psychology. You can follow Tim on Twitter: @LtTimMcMillan. Tim can be reached by email: tim@thedebrief.org or through encrypted email: LtTimMcMillan@protonmail.com
