The EU has announced that it is the first continent to set rules on the use of AI. Photo: Shutterstock

Europe reaches deal on world’s first comprehensive AI rules

  • EU negotiators have signed a tentative political agreement for the Artificial Intelligence Act, paving the way for oversight of services like ChatGPT
  • The US, UK, China and global coalitions like the G7 have jumped in with their own proposals to regulate AI, though they are still catching up to Europe

European Union negotiators clinched a deal on Friday on the world’s first comprehensive artificial intelligence rules, paving the way for legal oversight of technology used in popular generative AI services like ChatGPT that has promised to transform everyday life and spurred warnings of existential dangers to humanity.

Negotiators from the European Parliament and the bloc’s 27 member countries overcame big differences on controversial points including generative AI and police use of facial recognition surveillance to sign a tentative political agreement for the Artificial Intelligence Act.

“Deal!” tweeted European Commissioner Thierry Breton, just before midnight. “The EU becomes the very first continent to set clear rules for the use of AI.”

It came after marathon closed-door talks this week, with one session lasting 22 hours before a second round kicked off Friday morning.

Officials provided scant details on what exactly will make it into the eventual law, which would not take effect until 2025 at the earliest. They were under the gun to secure a political victory for the flagship legislation but were expected to leave the door open to further talks to work out the fine print, likely to bring more backroom lobbying.

The EU took an early lead in the global race to draw up AI guardrails when it unveiled the first draft of its rule book in 2021. The recent boom in generative AI, however, sent European officials scrambling to update a proposal poised to serve as a blueprint for the world.

The European Parliament will still need to vote on it early next year, but with the deal done that is a formality, Brando Benifei, an Italian member of the European Parliament who co-led the negotiations, said late on Friday.

“It’s very very good,” he said by text after being asked if it included everything he wanted. “Obviously we had to accept some compromises but overall very good.”

Generative AI systems like OpenAI’s ChatGPT have exploded into the world’s consciousness, dazzling users with the ability to produce humanlike text, photos and songs but raising fears about the risks the rapidly developing technology poses to jobs, privacy and copyright protection and even human life itself.

Now, the US, UK, China and global coalitions like the Group of 7 major democracies have jumped in with their own proposals to regulate AI, though they are still catching up to Europe.

Once the final version of the EU’s AI Act is worked out, the text needs approval from the bloc’s 705 lawmakers before they break up for EU-wide elections next year. That vote is expected to be a formality.

The AI Act was originally designed to mitigate the dangers from specific AI functions based on their level of risk, from low to unacceptable. But lawmakers pushed to expand it to foundation models, the advanced systems that underpin general purpose AI services like ChatGPT and Google’s Bard chatbot.

Foundation models looked set to be one of the biggest sticking points for Europe. However, negotiators managed to reach a tentative compromise early in the talks, despite opposition led by France, which called instead for self-regulation to help home-grown European generative AI companies competing with big US rivals including OpenAI’s backer, Microsoft.

Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet. They give generative AI systems the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.

Under the deal, the most advanced foundation models that pose the biggest “systemic risks” will get extra scrutiny, including requirements to disclose more information such as how much computing power was used to train the systems.

Researchers have warned that these powerful foundation models, built by a handful of big tech companies, could be used to supercharge online disinformation and manipulation, cyberattacks or creation of bioweapons.

Rights groups also caution that the lack of transparency about data used to train the models poses risks to daily life because they act as basic structures for software developers building AI-powered services.

The thorniest topic turned out to be AI-powered facial recognition surveillance, on which negotiators found a compromise only after intensive bargaining.

European lawmakers wanted a full ban on public use of facial scanning and other “remote biometric identification” systems because of privacy concerns, while governments of member countries wanted exemptions so law enforcement could use them to tackle serious crimes like child sexual exploitation or terrorist attacks.
