
Are Unfriendly AI the Biggest Risk to Humanity?

by EditorDavid from Slashdot (#608J1)
"Ethereum creator Vitalik Buterin believes that unfriendly artificial intelligence poses the biggest risk to humanity..." reports a recent article from Benzinga:[In a tweet] Buterin shared a paper by AI theorist and writer Eliezer Yudkowsky that made a case for why the current research community isn't doing enough to prevent a potential future catastrophe at the hands of artificially generate intelligence. [The paper's title? "AGI Ruin: A List of Lethalities."] When one of Buterin's Twitter followers suggested that World War 3 is likely a bigger risk at the moment, the Ethereum co-founder disagreed. "Nah, WW3 may kill 1-2b (mostly from food supply chain disruption) if it's really bad, it won't kill off humanity. A bad AI could truly kill off humanity for good."


