On museum walls, in textbooks, and increasingly across social media feeds, artificial intelligence (AI) is breathing life into humanity’s distant ancestors. With a few prompts, AI can conjure scenes of Neanderthals hunting mammoths, tending fires, or raising families beneath icy skies.
However, according to new research, these vivid reconstructions may tell us less about Neanderthals—and more about the outdated assumptions embedded in AI itself.
A new empirical study published in Advances in Archaeological Practice reveals that generative AI systems tasked with recreating prehistoric life commonly rely on scientific ideas that are decades out of date, producing images and narratives that misrepresent current archaeological knowledge and distort public perceptions of history.
The findings raise wider concerns about how AI could affect public knowledge of human history, potentially reinforcing misconceptions at scale. Ultimately, the implications extend far beyond archaeology, offering a glimpse of how artificial intelligence may reshape collective memory—and how easily it can mislead.
“We present a case study examining Neanderthal behavior, juxtaposing published archaeological knowledge with images and text made using AI,” researchers write. “Our study reveals a low correspondence between scientific literature and artificially intelligent material, which reflects dated knowledge and cultural anachronisms.”
Artificial intelligence is rapidly transforming disciplines from medicine to art. Archaeology, too, has begun to explore its potential, using machine learning to identify ancient sites, analyze artifacts, and reconstruct past environments. However, generative AI—tools that create entirely new images or text—delivers a new and potentially disruptive capability to reinvent the past.
To investigate how accurately AI represents prehistory, researchers Dr. Matthew Magnani of the University of Maine and Dr. Jon Clindaniel of the University of Chicago conducted a large-scale comparison of AI-generated content with more than a century of scientific research on Neanderthals.
Using OpenAI’s DALL-E 3 image generator and the ChatGPT text model, the researchers produced hundreds of AI-generated depictions of Neanderthal daily life. They then compared those outputs to 2,063 scholarly abstracts published between 1923 and 2023, drawn from major scientific databases.
The goal was to measure how closely AI-generated content aligned with contemporary scientific understanding—or whether it reflected older, outdated ideas.
Researchers discovered that AI-generated text descriptions of Neanderthals most closely resembled scientific literature from the early 1960s, while AI-generated images aligned more closely with scholarship from the late 1980s and early 1990s. This temporal gap means that AI depictions often lag decades behind modern archaeological research.
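The article does not reproduce the study's comparison pipeline, but the core idea, scoring a piece of AI-generated text against abstracts pooled by era to find the closest match, can be sketched in a few lines of Python. Everything below is an illustrative assumption rather than the authors' actual method: the toy abstracts, the decade pooling, and the TF-IDF cosine-similarity measure.

```python
# Minimal sketch: estimate which era of scholarship an AI-generated
# description most resembles. Illustrative only; the study's actual
# pipeline is not described in this article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for a real corpus of dated abstracts.
abstracts = {
    1961: "Neanderthals as stooped, brutish scavengers of carcasses ...",
    1995: "Evidence for Neanderthal fire use and cooperative hunting ...",
    2021: "Neanderthal symbolic behavior, pigment use, and burial ...",
}
ai_text = "Hunched, fur-clad figures drag a carcass toward a cave ..."

# Pool abstracts by decade so each decade becomes one document.
decades = {}
for year, text in abstracts.items():
    decades.setdefault(year // 10 * 10, []).append(text)
docs = {d: " ".join(t) for d, t in sorted(decades.items())}

# Vectorize the decade documents together with the AI text, then
# score the AI text against each decade.
matrix = TfidfVectorizer(stop_words="english").fit_transform(
    list(docs.values()) + [ai_text]
)
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for decade, score in zip(docs, scores):
    print(f"{decade}s: similarity {score:.3f}")
print(f"Closest era: {max(zip(scores, docs))[1]}s")
```

On a real corpus, the highest-scoring decade would be the era the AI output most resembles; that is the kind of temporal alignment the study reports, with AI-generated text landing closest to the early 1960s.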
In practical terms, that lag shows up in how Neanderthals look and behave. The study found that AI frequently portrayed them using physical features and lifestyles that scientists had long rejected.
Many images depicted hunched, heavily furred figures with exaggerated, primitive features—more reminiscent of early 20th-century stereotypes than of modern reconstructions.
Some scenes also included technologies that Neanderthals never possessed, such as glass vessels, metal tools, or architectural features that would not appear in human history for tens of thousands of years.
Researchers say these anachronisms reveal a troubling pattern. Rather than synthesizing the latest science, AI often recombines fragments of older cultural narratives, outdated textbooks, and popular imagery.
And the problem is not just visual—it’s structural.
Generative AI systems are trained on massive datasets drawn from across the internet, including books, articles, and websites. But not all information is equally represented.
Older materials are often more accessible, particularly those in the public domain. Meanwhile, newer academic research is frequently locked behind paywalls, limiting its influence on AI training data. As a result, AI may unintentionally amplify obsolete ideas.
“Although the source information used to train generative AI is opaque—not least because of the misuse of copyrighted materials by large companies—it can be assumed that the availability of knowledge will shape the AI outputs to skew toward older, more visible texts or publicly available information on websites that is more accessible to crawlers but that on average might reflect older information,” researchers note.
In other words, AI doesn’t necessarily learn from the past—it may get stuck in it.
Beyond scientific accuracy, the study uncovered deeper cultural biases. AI-generated scenes overwhelmingly focused on muscular male hunters, often sidelining women and children.
This imbalance mirrors long-standing biases in earlier scientific illustrations and popular media, which emphasized male dominance while overlooking the broader social complexity of prehistoric life.
Modern archaeology, by contrast, recognizes that Neanderthal societies likely included diverse roles, cooperative childcare, and sophisticated cultural practices.
Yet AI often fails to capture that nuance. Instead, it reinforces simplified narratives shaped by decades-old assumptions.
At first glance, inaccurate depictions of Neanderthals might seem like a minor issue. But the researchers argue that the stakes are much higher. AI is increasingly shaping how people learn about science, history, and culture.
Students use AI to complete assignments. Educators use it to create visual teaching materials. Social media platforms amplify AI-generated imagery to millions of users.
If those outputs are inaccurate, they could quietly reshape public understanding of human history. Over time, AI-generated misinformation could become indistinguishable from fact.
“These discrepancies may result from the types of data being used to train generative AI programs, which we expect reflect broader social biases distributed throughout bodies of writing and source images,” researchers write. “Reproduction of these biases risks their continued propagation and normalization.”
Despite these concerns, researchers emphasize that AI still holds enormous promise. Machine learning has already revolutionized archaeological discovery, helping identify lost cities, track looting, and decode ancient artifacts.
Generative AI could one day help visualize ancient history with unprecedented accuracy—if properly trained and guided. However, achieving that future will require careful attention to data quality and transparency.
The study highlights how access to information—particularly open-access research—may play a crucial role in shaping AI’s accuracy. If modern scientific findings remain inaccessible, AI may continue to rely on outdated knowledge.
In that sense, the future of artificial intelligence may depend on something surprisingly human: how we choose to share our knowledge.
For now, AI’s vision of our prehistoric ancestors remains a patchwork of outdated science, cultural stereotypes, and creative invention. It can recreate Neanderthals with stunning realism. However, realism is not the same as accuracy.
As artificial intelligence becomes more deeply woven into society, the question is no longer whether machines will help tell humanity’s story. It’s whether they will tell it correctly.
“Our current research suggests that the way we structure and make information available will directly influence AI output and, by extension, the way we imagine the past,” researchers conclude. “If articles from certain eras or from specific subfields are more likely behind paywalls, they will be less likely to feed into materials generated using AI. Academic publishing practices may ultimately work to undermine public knowledge.”
Tim McMillan is a retired law enforcement executive, investigative reporter and co-founder of The Debrief. His writing typically focuses on defense, national security, the Intelligence Community and topics related to psychology. You can follow Tim on Twitter: @LtTimMcMillan. Tim can be reached by email: tim@thedebrief.org or through encrypted email: LtTimMcMillan@protonmail.com
