
Eric Schmidt Thinks AI Is As Powerful As Nukes

by BeauHD, from Slashdot on (#61SSG)
An anonymous reader quotes a report from Motherboard: Former Google CEO Eric Schmidt compared AI to nuclear weapons and called for a deterrence regime similar to the mutually assured destruction that keeps the world's most powerful countries from destroying each other. Schmidt talked about the dangers of AI at the Aspen Security Forum during a panel on national security and artificial intelligence on July 22. While fielding a question about the value of morality in tech, Schmidt explained that he himself had been naive about the power of information in the early days of Google. He then called for tech to be better aligned with the ethics and morals of the people it serves and made a bizarre comparison between AI and nuclear weapons.

Schmidt imagined a near future where China and the U.S. needed to cement a treaty around AI. "In the 50s and 60s, we eventually worked out a world where there was a 'no surprise' rule about nuclear tests and eventually they were banned," Schmidt said. "It's an example of a balance of trust, or lack of trust, it's a 'no surprises' rule. I'm very concerned that the U.S. view of China as corrupt or Communist or whatever, and the Chinese view of America as failing will allow people to say 'Oh my god, they're up to something,' and then begin some kind of conundrum. Begin some kind of thing where, because you're arming or getting ready, you then trigger the other side. We don't have anyone working on that and yet AI is that powerful."

Schmidt imagined a near future where both China and the U.S. would have security concerns that force a kind of deterrence treaty between them around AI. He speaks of the 1950s and '60s, when diplomacy crafted a series of controls around the most deadly weapons on the planet. But for the world to get to a place where it instituted the Nuclear Test Ban Treaty, SALT II, and other landmark agreements, it took decades of nuclear explosions and, critically, the destruction of Hiroshima and Nagasaki. The atomic bombings of those two Japanese cities at the end of World War II killed tens of thousands of people and proved to the world the everlasting horror of nuclear weapons. The governments of Russia and China then rushed to acquire the weapons. The way we live with the possibility that these weapons will be used is through something called mutual assured destruction (MAD), a theory of deterrence that ensures if one country launches a nuke, it's possible that every other country will too. We don't use the most destructive weapon on the planet because of the possibility that doing so will destroy, at the very least, civilization around the globe.

"The problem with AI is not that it has the potentially world destroying force of a nuclear weapon," writes Motherboard's Matthew Gault. "It's that AI is only as good as the people who designed it and that they reflect the values of their creators. AI suffers from the classic 'garbage in, garbage out' problem: Racist algorithms make racist robots, all AI carries the biases of its creators, and a chatbot trained on 4chan becomes vile..." "AI is a reflection of its creator. It can't level a city in a 1.2 megaton blast. Not unless a human teaches it to do so."


