The EU reaches a preliminary agreement to regulate generative AI tools, with additional rules for higher-capability models such as GPT-4
According to reports from the Washington Post, Engadget and other foreign media, as the world scrambles to manage the risks posed by the rapid development of AI, EU officials reached a "landmark" provisional agreement on the Artificial Intelligence Act (AI Act) on Friday local time. It will be the most comprehensive AI regulation in the region and the broadest, most far-reaching bill of its kind to date.
Thierry Breton, the EU's Commissioner for the Internal Market, announced on the X platform on Friday local time that representatives of the European Commission, the European Parliament and the 27 member states had agreed to a series of controls on generative AI tools such as ChatGPT and Bard, which can produce content on demand.
Although the draft legislation still needs to be formally approved by EU member states and the Parliament, it marks a key step in EU policy: it will regulate the development and dissemination of machine learning and AI models, and their use in applications spanning education, employment, healthcare and other fields.
AI development will be divided into four categories, distinguished by the degree of social risk each may pose: minimal, limited, high, and prohibited.
Prohibited: Includes any use that circumvents user consent, targets protected groups, or performs real-time biometric tracking (such as facial recognition).
High Risk: Includes anything "intended to be used as a security component of a product" or for specific applications such as critical infrastructure, education, legal/judicial affairs and employee recruitment.
At the same time, chatbots like ChatGPT, Bard, and Bing fall into the “limited risk” category.
The European Commission wrote in the agreement that AI should not be an "end" in itself but a tool that serves people, with the ultimate goal of benefiting human beings. Accordingly, rules for AI on the EU market, or for AI that otherwise affects EU citizens, should be "people-centered," assuring people that the technology is used in a "safe and legal manner."
Models deemed to pose "systemic risks" will be subject to additional rules, according to the document. According to reports, the EU will determine whether such a risk exists based on the computing power used to train the model, setting the threshold at more than 10^25 floating-point operations during training. Some experts say the only model currently above this threshold is OpenAI's GPT-4.
In addition, the EU's executive arm can designate other thresholds based on indicators such as the size of the training data set, whether the model has at least 10,000 registered business users in the EU, or the number of registered end users.
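To make the reported compute threshold concrete, here is a rough back-of-the-envelope sketch in Python. It uses the widely cited approximation that training compute is about 6 × parameters × tokens; the model sizes and token counts below are illustrative assumptions, not figures from the agreement.

```python
# Reported AI Act threshold for "systemic risk" models:
# total training compute above 1e25 floating-point operations.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimate_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute via the common 6 * N * D rule."""
    return 6 * num_parameters * num_tokens


def poses_systemic_risk(num_parameters: float, num_tokens: float) -> bool:
    """Check the estimate against the reported threshold."""
    return estimate_training_flops(num_parameters, num_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# A hypothetical 70-billion-parameter model trained on 2 trillion tokens
# comes out to roughly 8.4e23 FLOPs, well below the reported threshold.
flops = estimate_training_flops(70e9, 2e12)
print(f"{flops:.2e}", poses_systemic_risk(70e9, 2e12))
```

Under this approximation, a model would need on the order of tens of times more training compute than the hypothetical example above before the extra obligations would apply.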
The report also said that these "more capable" models should sign up to a code of conduct while the European Commission develops more coordinated, longer-term controls. These models must also meet the following requirements:
Proactively report their energy consumption
Conduct red team/adversarial testing internally or externally
Assess and mitigate possible systemic risks and report any incidents
Ensure appropriate cybersecurity controls are used
Report information used to fine-tune the model and describe its system architecture
Comply with new, more energy-efficient standards as they are developed
Models that do not sign up to the code of conduct must instead demonstrate to the European Commission that they comply with the AI Act. Notably, the exemption for open-source models does not apply to those "deemed to pose systemic risks."