Welcome to this week’s installment of The Intelligence Brief… recently, a pioneering technologist heralded as the “Godfather of artificial intelligence” took a big step back to warn us about the potential perils such technology might pose in the years ahead. In our analysis, we’ll be looking at 1) what prompted Dr. Geoffrey Hinton to quit his job and warn us about A.I., 2) how other leading technologists have expressed similar concerns in recent weeks, and 3) whether competition in the field of A.I. is driving the technology in potentially dangerous directions.
Quote of the Week
“Right now, what we are doing is not a science but a kind of alchemy.”
– Eric Horvitz, Chief Scientific Officer, Microsoft
Latest Stories: The stories we’re covering this week at The Debrief include our latest feature, in which the team of scientists at Applied Physics offered us their insights on how a famous UAP incident from 2006 could point to a kind of technology that may one day revolutionize space travel. Elsewhere, has one of the universe’s greatest mysteries finally been solved? You can find links to all our recent stories at the end of this newsletter.
Podcasts: This week in podcasts from The Debrief, Stephanie Gerk and MJ Banias talk about the push for LGBTQ+ inclusion in the space industry, and Stephanie also muses over mysterious tracks left in Greenland on the latest installment of The Debrief Weekly Report. Meanwhile, this week on The Micah Hanks Program, after examining several cases involving purported health effects related to UAP, we take an epistemological look at unidentified aerial phenomena, asking, “what do we really know, and how do we know it?” You can subscribe to all of The Debrief’s podcasts, including audio editions of Rebelliously Curious, by heading over to our Podcasts Page.
Video News: Recently on Rebelliously Curious, Chrissy Newton sat down with CFO, professor, Bitcoin enthusiast, and psychedelics advocate Paul Hynek, the son of J. Allen Hynek, former scientific advisor to the U.S. Air Force’s UFO investigations. Also, if you missed the first installment of our all-new series “Ask Dr. Chance,” be sure to check out the first episode, with episode two airing later this week. You can also watch past episodes and other great content from The Debrief on our official YouTube Channel.
With that all behind us, it’s time to look at the latest round of A.I. experts who have stepped forward with concerns about where this technology is headed, and what that could mean going forward.
A Warning From the Godfather of A.I.
This week, one of the leading pioneers of the burgeoning artificial intelligence revolution made headlines around the world. However, it wasn’t his contributions to the advancement of this controversial technology that earned him so much attention.
It was the fact that he had announced he was stepping away from it.
Geoffrey Hinton is the man behind the technology that serves as the intellectual basis for what companies like Google, OpenAI, and many others are now capitalizing on, work he undertook with two graduate students at the University of Toronto in 2012.
However, Hinton announced this week that he would be leaving Google, where he spent the last decade working to further the company’s efforts in the field of A.I., stating that the only way he could speak out about the potential dangers of the technology would be as a free agent.
Hinton is now one of many leaders in the quickly advancing field who warn that the unforeseen consequences of artificial intelligence could one day come back to haunt us. But why are so many experts like him stepping forward now, and what seems to have been the breaking point for those who once supported A.I. research?
The Inevitability of Artificial Intelligence
“If I hadn’t done it, somebody else would have,” Hinton was quoted as saying in The New York Times this week, while implying that such sentiments, in light of the potential dangers A.I. could represent to humanity in the near future, offer little consolation.
The widely read Times feature examined Hinton’s path from “A.I. groundbreaker to doomsayer,” showcasing how he and many others have called for a necessary step back, and perhaps a slower pace of development, than artificial intelligence has recently seen.
In March, technologists including Elon Musk, Apple co-founder Steve Wozniak, and other leaders in the A.I. field published an open letter calling for a temporary pause on A.I. development.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” the authors of the March 22, 2023 letter wrote.
“As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter reads.
“Unfortunately, this level of planning and management is not happening,” the authors added, “even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
Let’s take a look at that “out-of-control race” the authors of this letter warned us about, and what that could mean.
Is Competition Driving Artificial Intelligence Down a Dangerous Path?
In recent months we have seen not just an apparent quickening in the development of A.I. systems, but also an obvious escalation driven by competition. Just weeks after the release of OpenAI’s ChatGPT, Google announced that beta testing would begin for Bard, its own new A.I. assistive technology. This was immediately followed by OpenAI announcing that it planned to allow ChatGPT limited access to the World Wide Web, a move that will improve the chatbot’s performance. However, some also cite it as a potential means by which a truly autonomous artificial intelligence might one day wreak havoc by gaining access to the Internet… and everything connected to it.
Are such sudden competitive advancements being thoroughly considered? Apparently not, according to many experts in the A.I. field. For Hinton, concerns like these were among those that prompted him to quit his job at Google.
However, Hinton isn’t warning us about potential artificial intelligence takeover scenarios… at least not in the near term. Rather, chief among his concerns right now is the proliferation of A.I.-produced content online, which could mean that people may “not be able to know what is true anymore.”
Also addressing such concerns recently was Microsoft Corp. Chief Economist Michael Schwarz, who expressed his confidence that “AI will be used by bad actors, and yes it will cause real damage” while speaking at a World Economic Forum panel in Geneva. “It can do a lot of damage in the hands of spammers with elections and so on,” Schwarz added.
Experts like Hinton are, of course, also concerned about more dangerous possibilities, such as artificial intelligence choosing to carry out its own motives irrespective of humans, or even presenting existential threats to humanity. Such concerns were initially thought to be decades away, if they would ever come to fruition at all, yet they now could be looming much closer.
In their March letter, Elon Musk and the other signatories wrote that humanity has previously chosen to hit the pause button on other technologies that could have negatively impacted society.
“We can do so here,” the letter concludes. “Let’s enjoy a long AI summer, not rush unprepared into a fall.”
That concludes this week’s installment of The Intelligence Brief. You can read past editions of The Intelligence Brief at our website, or if you found this installment online, don’t forget to subscribe and get future email editions from us here. Also, if you have a tip or other information you’d like to send along directly to me, you can email me at micah [@] thedebrief [dot] org, or Tweet at me @MicahHanks.
Here are the top stories we’re covering right now…
- The Chicago O’Hare UAP Incident: Physics Team’s Analysis Offers a Fresh Look at This Famous 2006 Case
According to a group of physicists, witness descriptions of the famous 2006 Chicago O’Hare UAP incident are suggestive of a form of advanced propulsion that may one day revolutionize space travel.
- The Mysterious Nature of Dark Matter May Finally be Solved
A new study proposes that dark matter, the elusive substance that makes up roughly 85% of the matter in the universe, is likely made up of ultralight particles known as axions, and not the Weakly Interacting Massive Particles, or WIMPs, favored by most current theories.
- Pentagon Confirms New Details on Mysterious Balloon Being Tracked by U.S. Military Over Hawaii
A balloon of unknown origin is being tracked by the U.S. military after it passed over Hawaii, according to a Pentagon spokesperson and other officials familiar with the developing situation.
- New Satellite Images Reveal First Look at Chinese Airship and Laser Anti-Satellite Weapons Capabilities
New satellite images of a suspected Chinese airship were released online on Monday by geospatial-intelligence company BlackSky.
- Live Long and Be Fabulous… This week on The Debrief Weekly Report…
SUBSCRIBE TO ‘THE DEBRIEF WEEKLY REPORT’: Apple Podcasts | Spotify Join Stephanie Gerk and MJ Banias this week as they discuss the latest news from The Debrief. On this week’s episode, MJ and Stephanie dive into the push for LGBTQ+ inclusion in the space industry. Stephanie also muses over mysterious tracks left in Greenland by Elsa from Frozen while singing “Let It Go” at the top of her lungs. MJ decides that satellites aren’t real, and those green lasers seen over Japan are definitely […]
- In a Surprise Move, Russia Says it Will Support International Space Station Operations Through 2028
Russia says it has agreed to continue to support the International Space Station (ISS) until 2028 despite mounting tensions between Moscow and Western nations, according to a recent statement from NASA.
- Density: What Matters in the Universe
If we ever use these dark components to fuel engines of interstellar spacecraft, we will need to know their local densities. Avi Loeb weighs in.
- New Concerns Over Threats from Space Revealed in Latest U.S. Classified Document Leak
This week’s newsletter examines the latest concerns arising from U.S. top-secret document leaks, and what they convey about China and Russia’s space programs.
- Breakthrough Smart Material Can Change Shape and Color In Response To Multiple Stimuli
Researchers have created a breakthrough smart material that can change its shape and color after being activated by heat and electricity.
- This LGBTQ+ Non-Profit is Getting The Attention of the Space Industry by Breaking Down Barriers
LGBTQ+ non-profit “Gaaays in Spaaace” is pushing the space industry for change, and it’s using science fiction like Star Trek to do it.