AI Pioneers Call for Protections Against 'Catastrophic Risks'
AI pioneers have issued a stark warning about the technology's potential risks, calling for urgent global oversight. At a recent meeting in Venice, scientists from around the world discussed the need for a coordinated international response to AI safety concerns. The group proposed establishing national AI safety authorities to monitor and register AI systems; these authorities would collaborate to define red flags, such as self-replication or intentional deception capabilities. The report adds:

Scientists from the United States, China, Britain, Singapore, Canada and elsewhere signed the statement. Among the signatories was Yoshua Bengio, whose work is so often cited that he is called one of the godfathers of the field. There was Andrew Yao, whose course at Tsinghua University in Beijing has minted the founders of many of China's top tech companies. Geoffrey Hinton, a pioneering scientist who spent a decade at Google, participated remotely. All three are winners of the Turing Award, the equivalent of the Nobel Prize for computing.

The group also included scientists from several of China's leading A.I. research institutions, some of which are state-funded and advise the government. A few former government officials joined, including Fu Ying, who had been a Chinese foreign ministry official and diplomat, and Mary Robinson, the former president of Ireland. Earlier this year, the group met in Beijing, where they briefed senior Chinese government officials on their discussion.