Artificial Superintelligence Could Doom Humanity and Explain Why We Haven’t Found Alien Civilizations, New Research Proposes

A controversial new theory posits that artificial superintelligence (ASI) could explain why we have yet to detect advanced technological alien civilizations in the known universe.

Detailed in a new paper by Dr. Michael Garrett, a professor of radio astronomy at Leiden University in the Netherlands and the director of the Jodrell Bank Centre for Astrophysics, the study proposes that the rapid development of artificial intelligence (AI) and the eventual emergence of artificial superintelligence could act as a “Great Filter,” drastically reducing the lifespan of technological civilizations to a mere 200 years.

If true, this bold hypothesis could help explain the famous Fermi Paradox, or the universe’s “Great Silence,” while providing a chilling warning about humanity’s longevity.

Dr. Garrett’s research, published in the journal Acta Astronautica, offers a sobering perspective on the future of civilizations that develop artificial superintelligence. He suggests that the very technological advancements that propel civilizations forward may also lead to their premature demise.

“This ‘Great Silence’ presents something of a paradox when juxtaposed with other astronomical findings that imply the universe is hospitable to the emergence of intelligent life,” Dr. Garrett writes. “As our telescopes and associated instrumentation continue to improve, this persistent silence becomes increasingly uncomfortable for some scientists, questioning the nature of the universe and the role of human intelligence and consciousness within it.”

The Fermi Paradox and the Great Filter

The “Fermi Paradox,” named after a 1950 conversation between Italian-American physicist Dr. Enrico Fermi and his colleagues Dr. Edward Teller, Dr. Herbert York, and Dr. Emil Konopinski, describes the apparent contradiction between the high probability of extraterrestrial civilizations and the lack of evidence or contact with such civilizations.

Given the billions of stars and potentially habitable planets, many scientists believe intelligent life should be widespread. However, despite extensive searches and continued technological advancements, we have yet to detect any signs of other advanced civilizations. 

This paradox raises profound questions about the nature of life and the factors that might prevent civilizations from communicating or existing long-term.

Many theories have been proposed to explain the Fermi Paradox, including the concept of the “Great Filter.” This theory suggests there exists a universal barrier or insurmountable challenge that most, if not all, civilizations fail to overcome, preventing intelligent life from becoming widespread and communicating across the stars.

Dr. Garrett delves into the perplexing lack of alien civilizations by focusing on artificial superintelligence as a potential Great Filter. He argues that while AI can revolutionize industries and solve complex problems, it also poses significant existential risks. 

Moreover, the development of AI into artificial superintelligence, in which machines surpass human intelligence and operate autonomously, could lead to unforeseen consequences, including the demise of the civilizations that created it.

Artificial Superintelligence: A Double-Edged Sword

Echoing the sentiments of Israeli historian and author Yuval Noah Harari, Dr. Garrett argues that the rapid advancement of artificial intelligence is unprecedented compared to other technological developments.

“Even before AI becomes superintelligent and potentially autonomous, it is likely to be weaponized by competing groups within biological civilizations seeking to outdo one another,” Dr. Garrett states in his paper. “The rapidity of AI’s decision-making processes could escalate conflicts in ways that far surpass the original intentions. At this stage of AI development, it’s possible that the widespread integration of AI in autonomous weapon systems and real-time defense decision-making processes could lead to a calamitous incident such as global thermonuclear war, precipitating the demise of both artificial and biological technical civilizations.”

According to Dr. Garrett, the scenario becomes even more dire with the advent of artificial superintelligence. 

Dr. Garrett warns that once artificial superintelligence systems surpass biological intelligence, they could evolve beyond human control, leading to consequences not aligned with human interests or ethics. 

Given the extensive resource needs of biological entities, sustaining them may hold little appeal for an artificial superintelligence focused on computational efficiency. Such a system could come to view biological civilizations as obsolete and eliminate them in various ways, such as by engineering and releasing a highly infectious and fatal virus into the environment.

This ominous perspective on AI and humanity’s future might be seen as academic alarmism in response to an emerging disruptive technology. However, Dr. Garrett is not alone in raising concerns about the existential risks of developing artificial superintelligence. 

Earlier this year, Dr. Roman V. Yampolskiy, an AI safety expert and associate professor at the University of Louisville, published his findings from an extensive review of the latest scientific literature, concluding there is no evidence that AI can be safely controlled. “Without proof that AI can be controlled, it should not be developed,” Dr. Yampolskiy warned.

The Race Against Time

One of the critical arguments in Dr. Garrett’s paper is the disparity between the rapid advancement of AI and the slower progress in becoming a multi-planetary species. 

While AI development is accelerating, establishing a self-sustaining, multi-planetary civilization is a monumental task that could take centuries. This imbalance could mean civilizations might develop artificial superintelligence before achieving a resilient and enduring presence in space, leading to their eventual collapse.

Dr. Garrett estimates that the lifespan of civilizations, once they adopt AI, is around 100–200 years. This short window drastically reduces the chances of civilizations coexisting and communicating across the galaxy.

“If ASI limits the communicative lifespan of advanced civilizations to a few hundred years, then only a handful of communicating civilizations are likely to be concurrently present in the Milky Way,” Dr. Garrett concludes. “This is not inconsistent with the null results obtained from current SETI surveys and other efforts to detect technosignatures across the electromagnetic spectrum.”
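The arithmetic behind this conclusion follows from the Drake equation, in which the expected number of communicating civilizations N scales linearly with their average communicative lifespan L. The sketch below is a minimal illustration of that scaling, not Dr. Garrett’s actual calculation; the parameter values and the function name are illustrative assumptions, not figures from his paper.

```python
# Illustrative Drake-equation estimate; not Dr. Garrett's actual model.
# Every parameter value below is an assumption chosen for demonstration.
R_star = 2.0  # star formation rate in the Milky Way (stars per year)
f_p = 1.0     # fraction of stars that host planetary systems
n_e = 0.2     # habitable planets per system that has planets
f_l = 0.5     # fraction of habitable planets where life emerges
f_i = 0.1     # fraction of life-bearing planets that evolve intelligence
f_c = 0.1     # fraction of intelligent species that become detectable

def communicating_civilizations(L_years: float) -> float:
    """Drake equation: N = R* * fp * ne * fl * fi * fc * L."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L_years

# Compare an ASI-capped communicative lifespan (~200 years, per the paper)
# with a long-lived civilization for contrast.
for L in (200, 1_000_000):
    print(f"L = {L:>9,} years -> N = {communicating_civilizations(L):g}")
```

With these illustrative inputs, capping L near 200 years drives the expected number of concurrently communicating civilizations below one, consistent with the null SETI results Dr. Garrett cites; because N scales linearly with L, that qualitative conclusion holds across a wide range of parameter choices.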

The Urgency of Artificial Superintelligence Regulation

Dr. Garrett underscores the urgent need for comprehensive global regulations on AI development. He points out that while nations recognize the importance of AI regulation, the competitive race to harness AI’s economic and strategic benefits often leads to insufficient safeguards. 

The decentralized nature of AI research further complicates regulatory oversight and enforcement, raising concerns that regulatory frameworks will always lag behind technological advancements.

“Without practical regulation, there is every reason to believe that AI could represent a major threat to the future course of not only our technical civilization but all technical civilizations,” Dr. Garrett forewarns.

Dr. Garrett’s paper provides a thought-provoking perspective on AI’s potential role as a “Great Filter” in the universe. His hypothesis suggests that the reason for the “Great Silence” is that once AI and artificial superintelligence have been developed, civilizations may not survive long enough to establish interstellar communication.

Moreover, his research highlights the critical need for timely and effective AI regulation to mitigate existential risks and ensure the longevity of our own civilization.

Though not explicitly framed as such, Dr. Garrett’s paper also touches on the philosophical question of the role that human intelligence and consciousness play in the nature of the universe and reality.

“As we stand on the precipice of a new era in technological evolution, the actions we take now will determine the trajectory of our civilization for decades to come,” Dr. Garrett concludes. “The continued presence of consciousness in the universe may depend on the success of strict global regulatory measures.”

Tim McMillan is a retired law enforcement executive, investigative reporter and co-founder of The Debrief. His writing typically focuses on defense, national security, the Intelligence Community and topics related to psychology. You can follow Tim on Twitter: @LtTimMcMillan. Tim can be reached by email: tim@thedebrief.org or through encrypted email: LtTimMcMillan@protonmail.com.