Bill Gates Calls AI's Risks 'Real But Manageable'
This week Bill Gates said "there are more reasons than not to be optimistic that we can manage the risks of AI while maximizing their benefits."

One thing that's clear from everything that has been written so far about the risks of AI - and a lot has been written - is that no one has all the answers. Another thing that's clear to me is that the future of AI is not as grim as some people think or as rosy as others think. The risks are real, but I am optimistic that they can be managed. As I go through each concern, I'll return to a few themes:

- Many of the problems caused by AI have a historical precedent. For example, it will have a big impact on education, but so did handheld calculators a few decades ago and, more recently, allowing computers in the classroom. We can learn from what's worked in the past.
- Many of the problems caused by AI can also be managed with the help of AI.
- We'll need to adapt old laws and adopt new ones - just as existing laws against fraud had to be tailored to the online world.

Later Gates adds that "we need to move fast. Governments need to build up expertise in artificial intelligence so they can make informed laws and regulations that respond to this new technology."

But Gates acknowledged and then addressed several specific threats. He thinks AI can be taught to recognize its own hallucinations. "OpenAI, for example, is doing promising work on this front."

Gates also believes AI tools can be used to plug AI-identified security holes and other vulnerabilities - and does not see an international AI arms race. "Although the world's nuclear nonproliferation regime has its faults, it has prevented the all-out nuclear war that my generation was so afraid of when we were growing up. Governments should consider creating a global body for AI similar to the International Atomic Energy Agency."

He's "guardedly optimistic" about the dangers of deepfakes because "people are capable of learning not to take everything at face value" - and because of the possibility that AI "can help identify deepfakes as well as create them. Intel, for example, has developed a deepfake detector, and the government agency DARPA is working on technology to identify whether video or audio has been manipulated."

On jobs, Gates writes: "It is true that some workers will need support and retraining as we make this transition into an AI-powered workplace. That's a role for governments and businesses, and they'll need to manage it well so that workers aren't left behind - to avoid the kind of disruption in people's lives that has happened during the decline of manufacturing jobs in the United States."

Gates ends with this final thought: "I encourage everyone to follow developments in AI as much as possible. It's the most transformative innovation any of us will see in our lifetimes, and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks.

"The benefits will be massive, and the best reason to believe that we can manage the risks is that we have done it before."