Europe Reaches a Deal On the World's First Comprehensive AI Rules
An anonymous reader quotes a report from the Associated Press: European Union negotiators clinched a deal Friday on the world's first comprehensive artificial intelligence rules, paving the way for legal oversight of the technology behind popular generative AI services like ChatGPT, which has promised to transform everyday life and spurred warnings of existential dangers to humanity. Negotiators from the European Parliament and the bloc's 27 member countries overcame big differences on controversial points, including generative AI and police use of facial recognition surveillance, to sign a tentative political agreement for the Artificial Intelligence Act. "Deal!" tweeted European Commissioner Thierry Breton just before midnight. "The EU becomes the very first continent to set clear rules for the use of AI."

The result came after marathon closed-door talks this week, with one session lasting 22 hours before a second round kicked off Friday morning. Officials provided scant details on what exactly will make it into the eventual law, which wouldn't take effect until 2025 at the earliest. They were under the gun to secure a political victory for the flagship legislation but were expected to leave the door open to further talks to work out the fine print, likely bringing more backroom lobbying.

The AI Act was originally designed to mitigate the dangers posed by specific AI functions based on their level of risk, from low to unacceptable. But lawmakers pushed to expand it to foundation models, the advanced systems that underpin general-purpose AI services like ChatGPT and Google's Bard chatbot. Foundation models had looked set to be one of the biggest sticking points for Europe. However, negotiators managed to reach a tentative compromise early in the talks, despite opposition led by France, which called instead for self-regulation to help homegrown European generative AI companies compete with big U.S. rivals, including OpenAI's backer Microsoft. [...] Under the deal, the most advanced foundation models that pose the biggest "systemic risks" will get extra scrutiny, including requirements to disclose more information, such as how much computing power was used to train the systems.