Experts warn: AI could lead to human extinction

Protesters outside Sam Altman's speech in London
Experts, including the heads of OpenAI and Google DeepMind, have warned that artificial intelligence could lead to the extinction of humanity.
Dozens of experts issued a joint statement on the webpage of the Center for AI Safety, which reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Sam Altman, CEO of OpenAI, the company behind ChatGPT, Google DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei all supported the statement.
The Center for AI Safety website lists several potentially catastrophic scenarios:
- AI could be used as a weapon - for example, drug discovery tools could be used to create chemical weapons
- Misinformation generated by AI could destabilize societies and 'undermine collective decision-making'
- The power of AI may become increasingly concentrated in fewer hands, enabling "regime enforcement of narrow values through pervasive surveillance and repressive censorship"
- Enfeeblement, in which humans become dependent on AI, similar to the scenario depicted in the film Wall-E
Dr Geoffrey Hinton, who had earlier warned about the risks of superintelligent AI, also backed the Center for AI Safety's call.
Yoshua Bengio, a professor of computer science at the University of Montreal, also signed the joint statement. (Dr Hinton, Prof Bengio, and NYU Prof Yann LeCun, often referred to as the "godfathers of artificial intelligence" for their pioneering work in the field, jointly won the 2018 Turing Award for their outstanding contributions to computer science.)
However, Professor LeCun, who also works at Meta, said these doomsday predictions were overblown, tweeting that "the most common reaction among AI researchers to these prophecies of doom is a facepalm".
Fractured reality
At the same time, however, many other AI experts believe fears of AI wiping out humanity are overblown and distract from issues such as bias, which are already a problem in existing systems.
Arvind Narayanan, a computer scientist at Princeton University, has said that sci-fi-style disaster scenarios are unrealistic: "Current AI is nowhere near capable enough to make these risks a reality, so it distracts people from the short-term harms of AI."
Elizabeth Renieris, a senior researcher at Oxford's Institute for Ethics in AI, said she is more worried about risks much closer to the present than the long-term prospect of human extinction.
"Developments in AI will amplify bias, discrimination, exclusion, and other inequities in automated decision-making, while also making those decisions incomprehensible and unchallengeable," she said. These developments would "result in an exponential increase in the volume and spread of disinformation, thereby fracturing reality, eroding public trust, and exacerbating inequality, particularly for those on the wrong side of the digital divide".
Renieris also said that many AI tools are effectively "free riding" on "the entire human experience to date": many are trained on human-created content, text, art, and music, which they can then imitate, and their creators have "effectively transferred vast amounts of wealth and power from the public domain to a small handful of private entities".
But Dan Hendrycks, director of the Center for AI Safety, said that future risks and current problems "should not be viewed as antagonistic", and that "addressing some of the problems now can be beneficial in addressing many of the risks in the future".
Superintelligence efforts
Media coverage of the so-called "existential" threat posed by artificial intelligence has intensified since March 2023, when experts including Tesla boss Elon Musk signed an open letter calling for a halt to the development of the next generation of AI technology.
The letter asked: "Should we develop non-human minds that might eventually outnumber, outsmart, render us obsolete, and replace us?"
By contrast, the new campaign has issued a very brief statement intended to "open up the discussion".
The statement compares the risk to that posed by nuclear war. OpenAI recently suggested in a blog post that superintelligent AI might be regulated in a way similar to nuclear energy: "We may eventually need an agency similar to the IAEA (International Atomic Energy Agency) for superintelligence efforts."
"sit and relax"
Sam Altman and Google CEO Sundar Pichai recently discussed AI regulation with UK Prime Minister Rishi Sunak.
Asked about the latest warnings about the risks of AI, Rishi Sunak highlighted the benefits it brings to the economy and society.
"You've seen recently that AI has helped paralyzed people walk, new antibiotics have been discovered, but we need to make sure this is done in a safe and secure way," he said, adding: "That's why last week I Meet the CEOs of the major AI companies and discuss what protections we need to put in place, what kind of regulation should be in place to keep us safe."
"People will be concerned by reports that AI poses existential risks, like pandemics or nuclear wars."
"I want them to be reassured that the government is looking at this very carefully."
Sunak said he had recently discussed the issue with other leaders at the G7 summit of leading industrialized nations, and would raise it again soon during a visit to the United States.
The G7 recently formed a working group on AI.