TechScape: Why is the UK so slow to regulate AI?
Britain has announced £10m for regulators but has done very little to mitigate the risks linked with artificial intelligence. Plus, Facebook's deepfake Biden conundrum
Britain wants to lead the world in AI regulation. But AI regulation is a rapidly evolving, contested policy space in which there's little agreement over what a good outcome would look like, let alone the best methods to get there. And being the third most important hub of AI research in the world doesn't give you an awful lot of power when the first two are the US and China.
How to slice through this Gordian knot? Simple: move swiftly and decisively to do ... absolutely nothing.
The government will acknowledge on Tuesday that binding measures for overseeing cutting-edge AI development are needed at some point, but not immediately. Instead, ministers will set out "initial thinking for future binding requirements" for advanced systems and discuss them with technical, legal and civil society experts.
The government will also give £10m to regulators to help them tackle AI risks, as well as requiring them to set out their approach to the technology by 30 April.
The Intellectual Property Office, the UK government's agency overseeing copyright laws, has been consulting with AI companies and rights holders to produce guidance on text and data mining, where AI models are trained on existing materials such as books and music.
However, the group of industry executives convened by the IPO to oversee the work has been unable to agree on a voluntary code of practice, meaning responsibility has returned to officials at the Department for Science, Innovation and Technology.
Meta's oversight board has found that a Facebook video wrongfully suggesting that the US president, Joe Biden, is a paedophile does not violate the company's current rules, while deeming those rules "incoherent" and too narrowly focused on AI-generated content.
The board, which is funded by Meta, Facebook's parent company, but run independently, took on the Biden video case in October in response to a user complaint about an altered seven-second video of the president.