OpenAI releases AI safety guidelines: Board of directors has the power to block the release of new AI models

Hayo News
December 19th, 2023

On Monday (December 18) local time, the artificial intelligence (AI) research company OpenAI announced a set of guidelines for preventing AI risks. One rule is worth particular attention: even if the CEO and other company leaders believe an AI model is safe, the board of directors can still block its release.

ChatGPT, the chatbot developed by OpenAI, has now been available for a full year. This phenomenal application has greatly accelerated development in the AI field, and with it have come concerns about AI development spinning out of control.

Regulators around the world are formulating their own AI rules, and OpenAI, a leader in the field, is also taking action.

On October 27 of this year, OpenAI announced the formation of a safety team called "Preparedness" to minimize the risks posed by AI.

The Preparedness team is led by Aleksander Madry, director of MIT's Center for Deployable Machine Learning. The team will conduct capability assessments and red-team testing of AI models in order to track, predict, and guard against various types of catastrophic risk.

On Monday, OpenAI released the guidelines under the name "Preparedness Framework," emphasizing that the framework is still in a testing phase.

According to reports, the Preparedness team will send monthly reports to a new internal safety advisory group, which will analyze them and submit recommendations to OpenAI CEO Sam Altman and the board of directors. Altman and other company executives can decide whether to release new AI systems based on these reports, but the board of directors has the power to reverse that decision.
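To make that review flow concrete, here is a minimal sketch in Python of the process as described above. All names (SafetyReport, advisory_group_review, decide_release) and the decision logic itself are hypothetical illustrations, not OpenAI's actual code or internal policy.

    from dataclasses import dataclass

    @dataclass
    class SafetyReport:
        # Hypothetical stand-in for the Preparedness team's monthly report.
        model_name: str
        catastrophic_risk_found: bool

    @dataclass
    class Recommendation:
        # Hypothetical advisory-group recommendation sent to the CEO and board.
        report: SafetyReport
        release_advised: bool

    def advisory_group_review(report: SafetyReport) -> Recommendation:
        # The internal safety advisory group analyzes the monthly report
        # and forwards a recommendation to company leadership.
        return Recommendation(report, release_advised=not report.catastrophic_risk_found)

    def decide_release(rec: Recommendation, ceo_approves: bool, board_vetoes: bool) -> bool:
        # Executives decide based on the recommendation, but the board's
        # veto is final: it can reverse an approval.
        if board_vetoes:
            return False
        return ceo_approves and rec.release_advised

    # Example: even with CEO approval, a board veto blocks the release.
    report = SafetyReport("some-future-model", catastrophic_risk_found=False)
    rec = advisory_group_review(report)
    assert decide_release(rec, ceo_approves=True, board_vetoes=True) is False

The key design point the article highlights is in decide_release: the board veto is checked first and overrides everything else.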

As one of OpenAI's co-founders, Altman sat on the company's board until recently, when he was removed in a leadership shake-up and briefly left the company. Although he was ultimately able to return to OpenAI and continue serving as CEO, he does not have a seat on the newly formed "initial" board of directors.

The Preparedness team will repeatedly evaluate OpenAI's most advanced, yet-to-be-released AI models and rate them across four levels of perceived risk, from low to high: "low," "medium," "high," and "critical." Under the new guidelines, OpenAI will only roll out models rated "low" or "medium."
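As a rough illustration of that rating gate, the following sketch models the four risk levels as an ordered enum and the release rule as a simple threshold check. The names and structure are assumptions made for illustration, not OpenAI's implementation.

    from enum import IntEnum

    class RiskLevel(IntEnum):
        # The four levels described in the framework, ordered low to high.
        LOW = 1
        MEDIUM = 2
        HIGH = 3
        CRITICAL = 4

    def may_deploy(assessed: RiskLevel) -> bool:
        # Per the rule described above, only models assessed at "low" or
        # "medium" risk are eligible for release.
        return assessed <= RiskLevel.MEDIUM

    assert may_deploy(RiskLevel.MEDIUM)
    assert not may_deploy(RiskLevel.HIGH)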
