“I don’t really understand why they can do it.” The Godfather of AI Returns With a Warning

AI

Welcome to this week’s installment of The Intelligence Brief… in recent days, the “Godfather of artificial intelligence” has returned to offer more words of caution about the potential issues humankind could face in the years ahead with AI. In our analysis this week, we’ll look at some of Professor Geoffrey Hinton’s most recent statements, which include 1) why AI still can’t match us, though it’s getting close, 2) the one thing Hinton says he doesn’t understand that AI is seemingly able to do, and which varieties can do it, 3) the dangers of military use of AI, and 4) the economic impacts AI could have on humans.

Quote of the Week

“By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”

– Eliezer Yudkowsky

Latest Stories: Before getting into our analysis this week, a few of the stories we’re covering at The Debrief include how DARPA has announced it’s moving forward with developing a novel VTOL naval support drone codenamed ANCILLARY. Elsewhere, what is The Pancosmorio Theory? You’ll have to read and find out… and as always, you can get links to all our latest stories at the end of this week’s newsletter.

Podcasts: This week in podcasts from The Debrief, in the latest episode of The Debrief Weekly Report, Stephanie Gerk and MJ Banias groove to amino acid slow jams as they discuss lunar mysteries and ultrafast white dwarfs. Meanwhile, this week on The Micah Hanks Program, career R&D chemist Robert Powell of the Scientific Coalition for UAP Studies joins us to discuss large UAP detected on radar. You can subscribe to all of The Debrief’s podcasts, including audio editions of Rebelliously Curious, by heading over to our Podcasts Page.

Video News: Premiering this Friday on Rebelliously Curious, Chrissy Newton speaks with Richard Mansell, Chief Executive Officer and Co-founder of IVO Ltd, as they discuss how the organization is working on a new all-electric thruster called The IVO Quantum Drive that draws limitless power from the Sun. Also, check out the latest episode of Ask Dr. Chance, where Chance has a conversation with Tim Russ from Star Trek. Be sure to watch these videos and other great content from The Debrief on our official YouTube Channel.

With all that behind us, it’s time to examine the most recent cautionary words from a former innovator in the field of AI who now warns about the technology in his latest public statements.

Godfather of AI Raises New Concerns About Machine Intelligence

Computer scientist and cognitive psychologist Geoffrey Hinton, known by many as the “Godfather of artificial intelligence,” has returned with more warnings about the potential perils of artificial intelligence (AI), a burgeoning area of technology he had a direct hand in developing.

Speaking at the recent three-day Collision technology conference in Toronto, the mild-mannered British-Canadian scientist warned that although machine intelligence is not yet quite on par with human intelligence, the rate at which AI is advancing and becoming increasingly capable of mimicking humans is alarming.

Geoffrey Hinton
Computer scientist Geoffrey Hinton, the “Godfather of AI” (Credit: Geoffrey Hinton/Twitter).

“They still can’t match us, but they’re getting close,” Hinton said during the event.

So what else has AI’s godfather, now one of the technology’s most vocal critics warning against its potential misuse, recently said about why we should be concerned about artificial intelligence?

What the Godfather of AI Says He Doesn’t Understand About AI

“It’s the big language models that are getting close,” Hinton said during the Collision event when asked which forms of artificial intelligence are progressing the most in terms of matching human intelligence.

Hinton, who left his position at Google several months ago in order to speak more freely about the potential dangers of AI, made headlines for his grave outlook on the future of the technology if left unchecked.

However, during the recent Collision event, Hinton admitted something startling about the language models that are currently the closest to matching the capabilities of humans.

“I don’t really understand why they can do it, but they can do little bits of reasoning,” Hinton said.

Stop and ponder Hinton’s words for a moment: one of the chief innovators in the field of artificial intelligence admits that he doesn’t “really understand why” some large language models are capable of “little bits of reasoning” that appear to be comparable to human logic and reasoning.

Humans Are Machines, Too

“We’re just a machine,” Hinton said during the recent event. “We’re a wonderful, incredibly complicated machine, but we’re just a big neural net.”

In essence, AI may be capable of doing seemingly extraordinary things because it is increasingly beginning to function in the same ways that humans do.

“And there’s no reason why an artificial neural net shouldn’t be able to do everything we can do,” Hinton added.

Geoffrey Hinton
Geoffrey Hinton lecturing at an event in British Columbia (Credit: Eviatar Bach)

“But I think we have to take seriously the possibility that if they get to be smarter than us, which seems quite likely, and they have goals of their own, which seems quite likely, they may well develop the goal of taking control.”

History shows that the exercise of power and great technological capability has generally produced circumstances that benefit only a few, while being less than beneficial to others. If human behavior is any indication of the kinds of problems machine intelligence might pose in the future, it may not be unreasonable to assume that AI could follow suit, although gauging what the motivations of such an AI might be is more difficult.

Regardless of the circumstances, and whether or not it’s even intended by any prospective AI, “if they do that, we’re in trouble,” Hinton warned.

Machine Wars With Battle Bots

Although in most cases we have no way of knowing whether AI may develop intentions that could negatively impact humans, there are a few exceptions. One, according to Hinton, involves the deliberate use of AI by militaries in warfighting systems.

“If defense departments use [AI] for making battle robots, it’s going to be very nasty, scary stuff,” Hinton said, emphasizing that warfighting capabilities driven by AI could prove to be disastrous “even if it’s not super intelligent, and even if it doesn’t have its own intentions.”

AI
Conceptual art depicting a battle robot (Credit: Pixabay).

One example Hinton gives entails the destruction that could result from a form of AI that does “just what Putin tells it to.”

“It’s gonna make it much easier, for example, for rich countries to invade poor countries.

“At present, there’s a barrier to invading poor countries willy nilly,” Hinton said, “which is you get dead citizens coming home. If they’re just dead battle robots, that’s just great, the military-industrial complex would love that.”

They’re Gonna Take Our Jobs

Even beyond military concerns, Hinton warns of worrisome economic consequences, since AI seems likely to fill roles in areas of industry where less skilled workers generally seek employment.

“The jobs that are gonna survive AI for a long time are jobs where you have to be very adaptable and physically skilled,” Hinton said. “And plumbing is that kind of a job.”

While Hinton continues to raise concerns about where the development of AI might lead, he also says there is potential for good.

“I think progress in AI is inevitable, and it’s probably good, but we seriously need to worry about mitigating all the bad sides of it and worry about the existential threat,” Hinton said.

That concludes this week’s installment of The Intelligence Brief. You can read past editions of The Intelligence Brief at our website, or if you found this installment online, don’t forget to subscribe and get future email editions from us here. Also, if you have a tip or other information you’d like to send along directly to me, you can email me at micah [@] thedebrief [dot] org, or Tweet at me @MicahHanks.

Here are the top stories we’re covering right now…