On the Eve of an A.I. ‘Extinction Risk’? In 2023, Advancements in A.I. Signaled Promise, and Prompted Warnings from Global Leaders

In the field of artificial intelligence, OpenAI, led by CEO Sam Altman, along with the company’s ChatGPT chatbot and its mysterious Q* AI model, has emerged as a leading force within Silicon Valley.

While advancements in AI may hold the potential for positive future developments, OpenAI’s Q* and other AI platforms have also led to concerns among government officials worldwide, who increasingly warn about possible threats to humanity that could arise from such technologies.

2023’s Biggest AI Upset

Among the year’s most significant controversies involving AI, Altman was removed as CEO of OpenAI in November, only to be reinstated days later amid a drama that left several questions that, to date, remain unresolved.

On November 22, just days after Altman’s temporary ousting as the CEO of OpenAI, two people with knowledge of the situation told Reuters that “several staff researchers wrote a letter to the board of directors,” which reportedly warned about “a powerful artificial intelligence discovery that they said could threaten humanity.”

In the letter addressed to the board, the researchers highlighted the capabilities and potential risks associated with artificial intelligence. Although the sources did not outline specific safety concerns, some of the researchers who authored the letter had reportedly raised concerns about an “AI scientist” team, formed by combining two earlier “Code Gen” and “Math Gen” teams, whose work aimed to upgrade the AI’s reasoning abilities and its capacity to carry out scientific tasks.

In a surprising turn of events two days earlier, on November 20, Microsoft had announced its decision to hire Sam Altman along with Greg Brockman, the president of OpenAI and one of its co-founders, who had resigned in solidarity with Altman. Microsoft said at the time that the duo would run an advanced research lab for the company.

Days later, Sam Altman was reinstated as the CEO of OpenAI after more than 700 of the company’s employees threatened to quit and join Microsoft. In a recent interview with The Verge, Altman disclosed his initial response to the invitation to return following his dismissal, saying it “took me a few minutes to snap out of it and get over the ego and emotions to then be like, ‘Yeah, of course I want to do that’.”

“Obviously, I really loved the company and had poured my life force into this for the last four and a half years full-time, but really longer than that with most of my time. And we’re making such great progress on the mission that I care so much about, the mission of safe and beneficial AGI,” Altman said.

But the AI soap opera didn’t stop there. On November 30, Altman announced that Microsoft would join OpenAI’s board. The tech giant, holding a 49 percent ownership stake in the company after a $13 billion investment, would assume a non-voting observer position on the board. Amidst all this turmoil, questions remained about what, precisely, the new Q* model is, and why it had so many OpenAI researchers concerned.

So What is Q*?

Q* (pronounced “Q-star”) is believed to be a project within OpenAI that aims to use machine learning for logical and mathematical reasoning. According to reports, OpenAI has been training its AI to perform elementary school-level mathematics. Concerned employees at OpenAI had reportedly said Q* could represent a breakthrough in the company’s efforts to produce artificial general intelligence (AGI) that could surpass humans in the performance of various tasks, especially those that are economically valuable.

One source told Reuters on background last month that Q* could solve certain math problems extremely well, and that while the model is currently only as good as a grade-school student at math, the fact that it aced those tests has researchers feeling hopeful about Q*’s future success.

An ability to master mathematics suggests that AI could possess enhanced reasoning abilities similar to human intelligence, and experts believe this capability could hold tremendous potential for groundbreaking scientific research. Nonetheless, in the aftermath of last month’s OpenAI drama, questions remain about the new technologies under development at the company that prompted at least some of its employees to think they could potentially threaten humanity.

World Leaders Express Concerns About AI

Looking back on the evolution of AI during 2023, several political figures from around the world have also shared their perspectives, and their concerns, about the threats AI could represent if left unbridled.


On May 30, the Communist Party in China made a public statement warning countries around the world about the risks AI poses, and called for heightened national security measures. After a May meeting chaired by President Xi Jinping, party leaders emphasized the tension between the government’s goal of being a global leader in advanced technology and worries about the potential negative impacts of these technologies on society and politics.

“It was stressed at the meeting that the complexity and severity of national security problems faced by our country have increased dramatically,” the Chinese state-run Xinhua News Agency reported after the meeting.

More recently, Xi encouraged nations to join forces in addressing challenges posed by artificial intelligence this past November at the World Internet Conference Summit in the eastern city of Wuzhen, where he said China is ready to “promote the safe development of AI.” Li Shulei, director of the Communist Party’s publicity department, echoed Xi’s statements at the conference, expressing China’s commitment to collaborate with other nations to “improve the safety, reliability, controllability and fairness of artificial intelligence technology.”

Before the APEC Summit in San Francisco this past November, there was speculation that Biden and Xi might announce an agreement to restrict the use of artificial intelligence, particularly in areas like nuclear weapons control. No such agreement was reached, although Biden later stated that “we’re going to get our experts together to discuss risk and safety issues associated with artificial intelligence.”


On December 12, at the Global Partnership on Artificial Intelligence Summit in Delhi, Prime Minister Narendra Modi emphasized the potential dangers posed by artificial intelligence. The threats Modi highlighted included “deep fake” technology and potential terrorist activity that might leverage AI, although Modi also said he expects great things for his country to result from AI, including “the potential to revolutionize India’s tech landscape.”

“AI has several positive impacts,” Modi said, “but it could also have many negative impacts and this is a matter of concern. AI can become the biggest tool to help humanity’s development in the 21st century. But it can also play a major role in destroying us. Deepfake, for example, is a challenge for the world.”

“If AI weapons reach terrorist organisations, it could pose a threat to global security. We have to move quickly to create a global framework for ethical use of AI among G20 countries. We have to take such steps together (so) that we take responsible steps,” the Prime Minister said.

Just last month, the Prime Minister urged secure steps to ensure the safety of AI across all sectors of society and called on G20 nations to join forces on this issue, emphasizing the importance of AI reaching people while prioritizing safety.


On October 7, Canadian Innovation Minister François-Philippe Champagne addressed the development and governance of artificial intelligence, stating clearly that his role was to shift the narrative “from fear to opportunity.” When asked whether he viewed AI as a potential threat to humanity, however, he refrained from stating his thoughts or position on the question.

The following day, Sunday, October 8, Champagne appeared on CTV’s Question Period, where he told host Vassy Kapelos about the importance of transparency when interacting with and managing AI technology.

Champagne said he advocates for an AI framework that addresses Canadians’ concerns about the advancement of technology and fosters the development of “responsible innovation.” Champagne also said he would “let the experts debate what it could do,” emphasizing that his primary duty is to steer the shift “from fear to opportunity.”

When questioned again about his opinions on whether AI is a threat, Champagne said “there is a sense of anxiety, but at the same time, AI can do great things for humanity,” adding that “It’s for us to decide what we want AI to be.”

“Artificial intelligence (AI) technologies offer promise for improving how the Government of Canada serves Canadians. As we explore the use of AI in government programs and services, we are ensuring it is governed by clear values, ethics, and laws,” reads a statement on the Canadian Government website.

The Canadian government has been defining AI regulations since June 2022 with the Artificial Intelligence and Data Act, introduced as part of the larger Bill C-27.

However, critics and experts have stated Bill C-27 and the voluntary code of conduct are too ambiguous.

“I am hopeful it can do good things for humanity,” Champagne said in response to a question about whether AI scares him. “But at the same time, we need to prevent the really bad stuff that you say experts have been warning us (about).”


President Frank-Walter Steinmeier of Germany has advocated for enhanced digital literacy in society to address the threats that the swift integration of artificial intelligence poses to democracy.

Steinmeier said in June that such concerns are becoming more pressing, especially as disinformation can be rapidly created and disseminated, instilling fears and confusion in the public, discrediting science, and destabilizing financial markets.

Steinmeier added that societies should develop ethical and legal frameworks to oversee AI, both to govern its use in decision-making processes and to help people uncover instances when it is being used maliciously.

“We’ve been warned that potentially uncontrollable risks are coming our way,” Steinmeier said. “And that deserves our attention.”


Russian President Vladimir Putin has also weighed in on the global competition for AI development, predicting that the nation at the forefront of AI research would assert dominance in global affairs.

“Artificial intelligence is the future, not only for Russia, but for all humankind,” Putin said to students during a Russian Knowledge Day event earlier this year. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

The overall threat AI potentially represents to humanity raises not only political concerns as nations compete to leverage the technology, but also issues arising from the unforeseen consequences of how a future superintelligence might behave. The concern that AI might one day overtake humanity was once a narrative relegated to science fiction. Today, however, as such technologies advance, voices around the world are increasingly urging that care be employed in their development, to mitigate the many dangers that could arise from the misuse of machine intelligence.

Chrissy Newton is a PR professional and founder of VOCAB Communications. She hosts the Rebelliously Curious podcast, which can be found on The Debrief’s YouTube Channel. Follow her on X: @ChrissyNewton.