Humanity At Risk From AI 'Race To the Bottom,' Says MIT Tech Expert
An anonymous reader quotes a report from The Guardian: Max Tegmark, a professor of physics and AI researcher at the Massachusetts Institute of Technology, organized an open letter published in April, signed by thousands of tech industry figures including Elon Musk and the Apple co-founder Steve Wozniak, that called for a six-month hiatus on giant AI experiments. "We're witnessing a race to the bottom that must be stopped," Tegmark told the Guardian. "We urgently need AI safety standards, so that this transforms into a race to the top. AI promises many incredible benefits, but the reckless and unchecked development of increasingly powerful systems, with no oversight, puts our economy, our society, and our lives at risk. Regulation is critical to safe innovation, so that a handful of AI corporations don't jeopardize our shared future."

In a policy document published this week, 23 AI experts, including two modern "godfathers" of the technology, said governments must be able to halt the development of exceptionally powerful models. Gillian Hadfield, a co-author of the paper and the director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto, said models being built over the next 18 months would be many times more powerful than those already in operation. "There are companies planning to train models with 100x more computation than today's state of the art, within 18 months," she said. "No one knows how powerful they will be. And there's essentially no regulation on what they'll be able to do with these models."

The paper, whose authors include Geoffrey Hinton and Yoshua Bengio -- two winners of the ACM Turing Award, the "Nobel prize for computing" -- argues that powerful models must be licensed by governments and, if necessary, have their development halted.
"For exceptionally capable future models, e.g. models that could circumvent human control, governments must be prepared to license their development, pause development in response to worrying capabilities, mandate access controls, and require information security measures robust to state-level hackers, until adequate protections are ready." The unrestrained development of artificial general intelligence -- the term for a system that can carry out a wide range of tasks at or above human levels of intelligence -- is a key concern among those calling for tighter regulation.

Further reading: AI Risk Must Be Treated As Seriously As Climate Crisis, Says Google DeepMind Chief
Read more of this story at Slashdot.