Turing Award giant opposes the "joint letter": a so-called "pause on super-AI R&D" would just mean "R&D in secret"

Hayo News
March 30th, 2023
One day after the release of the thousand-signature open letter, something of an AI non-proliferation treaty, the big names responded one after another, and their remarks make for intriguing reading.

Yesterday, an open letter signed by thousands of prominent figures, calling for a six-month pause on training super-powerful AI, landed like a bomb on the Internet both in China and abroad.

After a day of back-and-forth, several key figures and other big names from all walks of life came forward to respond publicly.

Some responses are very official, some very personal, and some dodge the question entirely. But one thing is certain: whether you look at the views of these figures themselves or at the interest groups behind them, they all bear close scrutiny.

Interestingly, among the three Turing Award giants, one took the lead in signing, one strongly opposed, and one has not said a word.

Bengio signed, Hinton silent, LeCun objected

Andrew Ng (opposed)

In this affair, Andrew Ng, a founder of Google Brain and of the online education platform Coursera, is a clear-cut opponent.

He made his position plain: pausing "AI progress beyond GPT-4" for six months is a bad idea.

He said he has seen many new AI applications in education, healthcare, food, and other fields from which many people will benefit, and that improving GPT-4 would be beneficial too.

What we should do, he argued, is strike a balance between the enormous value AI creates and its real risks.

As for the letter's suggestion that "if a pause on training super AI cannot be enacted quickly, governments should step in," Ng said that kind of thinking is even worse.

Asking governments to pause emerging technologies they don't understand is anti-competitive, sets a terrible precedent, and is awful innovation policy, he said.

He acknowledged that responsible AI is important and that AI does have risks.

But the "AI companies are releasing dangerous codes like crazy" that the media has rendered is obviously too exaggerated. The vast majority of AI teams take responsible AI and safety very seriously. But he also admitted that "unfortunately, not all of them".

Finally he reiterated:

A six-month moratorium is not a practical proposal. To improve AI safety, regulations around transparency and auditing would be more practical and have a greater impact. As we advance the technology, let's also invest more in safety rather than stifle progress.

Under his tweet, netizens had already voiced strong opposition: the bosses can stay calm, probably because the pain of unemployment will never fall on them.

LeCun (opposed)

As soon as the open letter went out, some netizens rushed to spread the word: Turing Award giants Bengio and LeCun have both signed!

LeCun, who is never far from the front lines of the Internet, promptly shot the rumor down: no, I did not sign it, and I disagree with the premise of the letter.

One netizen replied: I also disagree with this letter, but I'm very curious: do you disagree because you think LLMs are simply not advanced enough to threaten humanity, or for some other reason?

But LeCun didn't answer any of these questions.

After 20 hours of enigmatic silence, LeCun suddenly retweeted a netizen's post:

"OpenAI waited 6 months to release GPT4! They even wrote a white paper for it...."

LeCun chimed in approvingly: exactly, a so-called "pause on R&D" would amount to nothing more than "R&D in secret," precisely the opposite of what some signatories hope for.

It seems that nothing can be hidden from LeCun's discerning eyes.

The netizen who had asked the earlier question agreed: that is exactly why I oppose this petition; none of the "bad guys" will actually stop.

"So it's like an arms treaty that no one keeps? Isn't there a lot of examples in history?"

A while later, LeCun retweeted another big name's post.

That person had written, "I didn't sign it either. The letter is filled with horrible rhetoric and ineffective or non-existent policy prescriptions." LeCun replied, "I agree."

Bengio and Marcus (in favor)

The first signature on the open letter belongs to Turing Award winner Yoshua Bengio.

New York University professor Gary Marcus, of course, also signed, and he appears to have been the first to break the news of the letter.

As the debate grew louder, he quickly posted a blog explaining his position, and it has plenty of highlights of its own.

Breaking news: the letter I mentioned earlier is now public. It calls for a six-month moratorium on training AI "more powerful than GPT-4." Many famous people have signed, and I have joined them. I didn't take part in drafting it, because there are things in it to quibble with (for example, how do we judge what is "more powerful than GPT-4" when the details of GPT-4's architecture and training set have never been released?), but the spirit of the letter is one I support: until we can get a better handle on the risks and rewards, we should proceed with caution. It will be very interesting to see what happens next.

Another point of view, which Marcus has just endorsed "100%," is also very interesting. It goes:

GPT-5 will not be AGI. Almost certainly, no GPT model will be AGI. It is completely impossible for any model optimized with the methods we use today (gradient descent) to become AGI. Upcoming GPT models will surely change the world, but the over-hyping is insane.

Altman (noncommittal)

As of now, Sam Altman has not made a clear statement on this open letter.

However, he did express some views on general artificial intelligence.

The elements needed for a good AGI: 1. the technical ability to align a superintelligence; 2. sufficient coordination among most of the leading AGI efforts.

Some netizens questioned: "Aligned with what? Aligned with whom? Aligning with some people means not aligning with others."

One comment hit the mark: "Then you should open it up."

Greg Brockman, another co-founder of OpenAI, retweeted Altman's post, emphasizing once again that OpenAI's mission "is to ensure that AGI benefits all of humanity."

Once again, netizens zeroed in on the awkward point: you talk all day about "aligning with the designers' intentions," but no one knows what alignment actually means.

One more voice: Yudkowsky (radical)

Decision theorist Eliezer Yudkowsky went even further:

Pausing AI development is not enough. We need to shut it all down. Shut it all down!

If it continues, we will all die.

As soon as the open letter was released, Yudkowsky immediately wrote a long article and published it in TIME magazine.

He said he did not sign the letter because, in his view, it was far too mild.

The letter underestimates the seriousness of the situation and asks for too little to resolve it.

The key issue, he said, is not "human-competitive" intelligence, as the letter puts it; the question is what happens once AI becomes smarter than humans.

The point is that many researchers, himself included, believe the most likely consequence of building an AI with superhuman intelligence is that everyone on Earth will die.

Not "maybe", but "must".

Without sufficient precision, the most likely outcome is an AI that does not do what we want it to do and does not care about us or about other sentient beings.

In theory, we should be able to teach AI this kind of caring, but right now we don't know how.

Without that kind of caring, what we get is an AI that neither loves you nor hates you; to it, you are just a pile of atoms that can be used for something else.

And if humans tried to resist a superhuman AI, they would inevitably lose, like "the 11th century trying to defeat the 21st century" or "Australopithecus trying to defeat Homo sapiens."

Yudkowsky said that when we imagine an AI doing harm, we picture a malicious thinker that lives on the Internet and sends people nasty emails all day. In reality, a hostile superhuman AI would be more like an alien civilization thinking millions of times faster than humans, one to which people look stupid and slow.

Once such an AI is smart enough, it will not stay confined to a computer. It could email a DNA sequence to a lab, have the lab synthesize proteins on demand, and thereby obtain life forms of its own, after which every living thing on Earth would die.

How are humans supposed to survive in that situation? Right now we have no plan. If we still do not understand GPT-4, and GPT-5 then shows an astonishing leap in capability, as happened from GPT-3 to GPT-4, it will be hard for us to tell whether it was humans who created GPT-5 or the AI itself.

In Yudkowsky's view, humanity needs to be able to guarantee that a superhuman AI "will not kill a single human being," and that could take at least 30 years. We simply cannot learn from our mistakes here, because once you are wrong, you are dead.

References:

https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

https://twitter.com/AndrewYNg/status/1641121451611947009

https://garymarcus.substack.com/p/a-temporary-pause-on-training-extra

Reprinted from 新智元
