Suspend advanced AI development for at least six months! Musk, Turing Award winners, and thousands of other experts sign an open letter: "Pause"
GPT-4 is so powerful that it is no longer just the public panicking: today, AI leaders around the world took action, with more than a thousand signatories issuing an open letter calling for a pause on training any AI stronger than GPT-4.
Just now, an open letter signed by a crowd of big names surfaced online, seeking to halt all AI more powerful than GPT-4!
In it, more than 1,000 signatories appeal for an immediate pause, lasting at least six months, on training AI systems more powerful than GPT-4.
Signatories so far include Turing Award winner Yoshua Bengio, Stability AI CEO Emad Mostaque, Apple co-founder Steve Wozniak, New York University professor Gary Marcus, Elon Musk, and Yuval Noah Harari, author of "Sapiens: A Brief History of Humankind."
Just look at this seemingly endless list of signatures: the density of big names is off the charts.

To collect the signatures of more than a thousand heavyweights, the organizers must have been preparing this joint letter for quite some time.
However, Yann LeCun, himself a Turing Award winner, did not sign: "I do not agree with this premise."
In addition, a signature claiming to be the "OpenAI CEO" also appeared on the list, but according to Marcus and a crowd of netizens, it was most likely not added by Altman himself. The whole affair is rather murky.
Before this article went to press, Marcus even @'d Sam Altman directly to confirm the news.
Big names unite: stop AI stronger than GPT-4
The open letter states that extensive research has shown, and top AI labs have acknowledged, that AI systems with human-competitive intelligence can pose profound risks to society and humanity.
As outlined in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
Unfortunately, no such planning and management has materialized so far.
In recent months, AI labs worldwide have been locked in an out-of-control race to develop and deploy ever more powerful AI systems that no one, not even their creators, can understand, predict, or reliably control.
Now that AI systems are becoming human-competitive at general tasks, we must ask ourselves:
Should we let machines flood our information channels with propaganda and lies? Should we automate away all jobs, including the fulfilling ones? Should we develop nonhuman minds that might one day outsmart us, make us obsolete, and replace us? Should we risk losing control of our civilization?
No unelected technology leader has the authority to make decisions of this magnitude.
Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable. That confidence must be well justified, and the greater a system's potential impact, the more convinced we need to be.
OpenAI's recent statement on artificial general intelligence notes that "at some point, it may be important to get independent review before starting to train future systems, and to agree to limit the rate of growth of compute used for creating new models."
We agree. That point is now.
Therefore, we call on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4.
This pause should be public and verifiable, and include all key players. If it cannot be enacted quickly, governments should step in and impose a moratorium.
During these six months, AI labs and independent experts should jointly develop a set of shared safety protocols for the design and development of advanced AI, rigorously audited and overseen by independent outside experts. These protocols must ensure that systems adhering to them are safe beyond reasonable doubt.
This does not mean a pause on AI development in general, only a step back from the dangerous race toward ever-larger, unpredictable, emergent black-box models.
All AI research and development should refocus on one goal: making today's most powerful state-of-the-art models more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal to humanity.
In parallel, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems.
Such systems should at a minimum include:
- A regulatory body dedicated to AI
- Provenance and watermarking systems to help distinguish real content from generated content and to track model leaks (a toy illustration follows this list)
- A robust auditing and certification ecosystem
- Clear liability rules for harm caused by AI
- Strong public funding for technical AI safety research
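To make the watermarking item above concrete, here is a minimal, purely illustrative sketch of one published approach to statistical text watermarking (in the spirit of the "green list" scheme of Kirchenbauer et al.): seed a random number generator with the previous token, mark a fixed fraction of the vocabulary "green," and check how often each token lands in its predecessor's green list. The function names and the 0.5 green fraction are assumptions of this sketch, not anything the letter itself prescribes.

```python
import hashlib
import random

GREEN_FRACTION = 0.5  # assumed fraction of the vocabulary marked "green" per step


def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    """Derive a pseudo-random 'green' subset of the vocabulary from the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * GREEN_FRACTION)))


def watermark_score(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens that fall in their predecessor's green list.

    A watermarking generator that prefers green tokens pushes this score
    well above GREEN_FRACTION; unwatermarked text stays near it.
    """
    if len(tokens) < 2:
        return 0.0
    hits = sum(tok in green_list(prev, vocab) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)
```

A real deployment would operate on tokenizer IDs rather than word strings and use a z-test against GREEN_FRACTION instead of the raw score, but the statistical idea is the same.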
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.
Society has hit pause before on other technologies with potentially catastrophic effects. We can do the same here.
Let's enjoy a long AI summer instead of heading into fall unprepared.
This open letter, signed by a crowd of big names, quickly set off an uproar.
Supporters argue the panic over AI is justified: training efficiency keeps climbing, and the systems' intelligence seems to expand by the day.
Opponents countered by dredging up the propaganda posters Edison once commissioned to convince the public that alternating current kills, arguing the letter is the same kind of baseless accusation: interested parties misleading a public that does not know the full story.
Sam Altman's intriguing stance
Judging from recent events, this letter has been a long time coming.
Since the end of last November, ChatGPT has acted like a starting gun, sending AI outfits around the world into a red-eyed sprint.
And OpenAI, the instigator, has not slowed down at all; together with its deep-pocketed backer Microsoft, it delivers another heavy blow every so often.
Wave after wave of panic over these advanced AI tools has washed over everyone.
Today, the big names finally made their move.
In an interview made public yesterday, Sam Altman offered some intriguing remarks.
He said that OpenAI's own researchers cannot explain why the GPT series has reasoning ability.
All they know is that in continuous testing, the GPT series suddenly began exhibiting reasoning capabilities, starting with ChatGPT.
Altman also dropped a startling line in the interview: "AI may indeed kill humans."
Beyond Altman, AI godfather Geoffrey Hinton, Bill Gates, and New York University professor Gary Marcus have all warned recently that AI wiping out humanity is really not empty talk.
OpenAI researcher predicts: AI will know that it is AI
Coincidentally, Richard Ngo of OpenAI's governance team has also published predictions about where AI will be in two years.
Before joining OpenAI, he was a research engineer on DeepMind's AGI safety team.

According to Richard's prediction, by the end of 2025 neural networks will:
- Have human-level situational awareness, e.g., know that they are neural networks
- Outperform humans at writing complex and effective plans
- Do a better job than most peer reviewers
- Be able to design, code, and ship complete applications autonomously
- Outperform any human at any computer task a white-collar worker can complete in 10 minutes
- Write award-winning short stories and books of up to 50,000 words
- Generate coherent 20-minute films

Still, skilled humans will continue to do better (albeit far more slowly) at:
- Writing novels
- Carrying out a plan reliably over several consecutive days
- Making scientific breakthroughs, such as novel theorems (though neural networks will have proven at least one)
- Typical manual-labor tasks, compared with robots controlled by neural networks
To add a plain-language gloss: situational awareness refers to an individual's perception, understanding, and prediction of the events and circumstances unfolding in its surroundings. That includes noticing dynamic changes in the environment, assessing their impact on oneself and others, and anticipating what may happen next.
For the precise definition of situational awareness used in AI research, see the following paper:

Paper: https://arxiv.org/abs/2209.00626
Richard says his forecast is really closer to 2 years, but since different people apply different evaluation criteria, 2.75 years seems the more robust figure.
Also, "prediction" here means Richard assigns the claim more than 50% credence, though not necessarily much more than 50%.
Note that these predictions are not based on any nonpublic information specific to OpenAI.
Netizens are eagerly waiting for GPT-5
In contrast to the highly cautious big names, netizens who have experienced GPT-4's explosive performance clearly cannot wait for GPT-5 to arrive.
Recently, predictions about GPT-5 have been springing up like mushrooms after rain...
(unofficial)
According to one mysterious team's prediction, GPT-5 will build on GPT-4 with a series of exciting features and performance gains, comprehensively surpassing it in reliability, creativity, and adaptability to complex tasks.
· Personalized templates: customized to the user's specific needs and input variables for a more personalized experience
· User-adjustable defaults: letting users tune the AI's professionalism, level of humor, speaking tone, and so on (a sketch of how this could be approximated today follows this list)
· Automatic conversion of text into other formats: still images, short videos, audio, and virtual simulations
· Advanced data management: logging, tracking, analyzing, and sharing data to streamline workflows and boost productivity
· Assisted decision-making: helping users make informed decisions by surfacing relevant information and insights
· Stronger NLP capabilities: improving the AI's understanding of, and responses to, natural language, bringing it closer to human level
· Integrated machine learning: allowing the AI to keep learning and improving, adapting to user needs and preferences over time
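None of these GPT-5 settings exist today, but the "adjustable defaults" idea above can already be approximated with a system prompt on the current OpenAI chat API. Below is a minimal sketch using the 0.x openai Python SDK as of early 2023; the tone, humor, and formality dials are hypothetical knobs invented for illustration, not parameters the API actually exposes.

```python
import openai  # pip install openai  (0.x SDK, circa early 2023)

openai.api_key = "sk-..."  # your API key


def personalized_reply(user_message: str,
                       tone: str = "friendly",
                       humor: str = "light",
                       formality: str = "casual") -> str:
    """Approximate 'adjustable AI defaults' with a system prompt.

    tone/humor/formality are illustrative stand-ins for the predicted
    GPT-5 settings, not real API parameters.
    """
    system = (f"You are a helpful assistant. Use a {tone} tone, "
              f"{humor} humor, and a {formality} register.")
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user_message},
        ],
        temperature=0.7,
    )
    return resp["choices"][0]["message"]["content"]


print(personalized_reply("Explain transformers in two sentences.", humor="dry"))
```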
GPT-4.5 as a transition
The team also predicts that an interim GPT-4.5 model will launch in September or October 2023.
GPT-4.5 would build on the strengths of the GPT-4 model released on March 14, 2023, bringing further improvements in dialogue capability and contextual understanding:
- Longer text input
GPT-4.5 may process and generate longer text inputs while maintaining context and coherence, improving its performance on complex tasks and its grasp of user intent.
- Enhanced coherence
GPT-4.5 may offer better coherence, keeping generated text focused on relevant topics throughout a conversation or a long piece of content.
- More accurate responses
GPT-4.5 may provide more accurate, context-sensitive responses, making it a more effective tool across applications.
- Model fine-tuning
Users may also find it easier to fine-tune GPT-4.5, customizing the model for specific tasks or domains such as customer support, content creation, and virtual assistants (a sketch of today's fine-tuning workflow follows).
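Fine-tuning GPT-4.5 is pure speculation at this point, but the workflow described above already exists for OpenAI's older base models. Here is a minimal sketch with the 0.x openai SDK, assuming a JSONL file of prompt/completion pairs; the file name is made up for illustration, and "davinci" stands in for whatever GPT-4.5 identifier might eventually be offered.

```python
import openai  # 0.x SDK; fine-tuning GPT-4-class models is hypothetical here

openai.api_key = "sk-..."

# Upload training data: one {"prompt": ..., "completion": ...} JSON object
# per line. "support_examples.jsonl" is a made-up name for this sketch.
upload = openai.File.create(
    file=open("support_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tune job. As of this writing only older base models such as
# "davinci" accept fine-tuning; swap in a GPT-4.5 identifier if one ships.
job = openai.FineTune.create(
    training_file=upload["id"],
    model="davinci",
)

# Poll for progress; the fine-tuned model name appears when the job succeeds.
print(openai.FineTune.retrieve(job["id"])["status"])
```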
Judging by how GPT-3.5 paved the way for GPT-4, GPT-4.5 is likely to lay a solid foundation for GPT-5's innovations. By addressing GPT-4's limitations and introducing new improvements, GPT-4.5 would play a key role in shaping GPT-5's development.
References:
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
https://chatgpt-5.ai/gpt-5-capabilities/
https://twitter.com/RichardMCNgo/status/1640568775018975232