Leading scientists and tech industry experts issued a new warning on Tuesday about the dangers artificial intelligence (AI) may pose to humankind in the years ahead.
“Mitigating the risk of extinction from AI should be a global priority,” read a portion of the statement that appeared on the website of the Center for AI Safety (CAIS), which focuses on reducing “societal-scale risks from AI through research, field-building, and advocacy.”
The CAIS statement ranked potential threats from AI alongside pandemics and nuclear war as dangers that could gravely impact life on Earth.
Among those who added their signatures to the online statement was Geoffrey Hinton, a longtime pioneer of machine learning who has previously warned about AI’s destructive potential.
Hinton recently quit his job at Google, citing concerns about AI and a desire to speak about the issue publicly. OpenAI CEO Sam Altman and dozens of others also signed the statement.
While many experts have voiced concerns about the potential dangers of AI over the last decade, the debate has escalated since the arrival of AI chatbots such as ChatGPT and Google’s Bard.
“While AI has many beneficial applications, it can also be used to perpetuate bias, power autonomous weapons, promote misinformation, and conduct cyberattacks,” according to a FAQ page at the CAIS website.
“Even as AI systems are used with human involvement, AI agents are increasingly able to act autonomously to cause harm,” it adds.
CAIS groups potential risks from artificial intelligence into eight categories: weaponization; the spread of misinformation; proxy gaming, in which AI pursues goals at the expense of individuals and society; enfeeblement resulting from overreliance on AI; the “value lock-in” of potentially oppressive systems; emergent goals that could lead to a loss of human control; the use of deception by AI; and power-seeking behavior that could result in any number of AI takeover scenarios.
In response to warnings about the potential misuse of AI and unforeseen complications that could result from its development, many countries are now working to regulate the development of AI.
The European Union’s AI Act is presently set to become “the first law on AI by a major regulator anywhere”; the forthcoming law sorts AI applications into three primary risk categories.
The CAIS statement issued on Tuesday is not the first instance of industry leaders and scientists collectively warning about such dangers. Earlier this year, Elon Musk and more than 1,000 others signed an open letter calling for a six-month pause in the development of AI to allow time to gauge potential outcomes and weigh risks.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” a portion of the letter states.
“This confidence must be well justified and increase with the magnitude of a system’s potential effects.”