AI Experts Are Increasingly Afraid of What They're Creating

by janrinok from SoylentNews on (#66BCH)

Arthur T Knackerbracket writes:

https://www.vox.com/the-highlight/23447596/artificial-intelligence-agi-openai-gpt3-existential-risk-human-extinction

In 2018 at the World Economic Forum in Davos, Google CEO Sundar Pichai had something to say: "AI is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire." Pichai's comment was met with a healthy dose of skepticism. But nearly five years later, it's looking more and more prescient.

AI translation is now so advanced that it's on the brink of obviating language barriers on the internet among the most widely spoken languages. College professors are tearing their hair out because AI text generators can now write essays as well as your typical undergraduate - making it easy to cheat in a way no plagiarism detector can catch. AI-generated artwork is even winning state fairs. A new tool called Copilot uses machine learning to predict and complete lines of computer code, bringing the possibility of an AI system that could write itself one step closer. DeepMind's AlphaFold system, which uses AI to predict the 3D structure of just about every protein in existence, was so impressive that the journal Science named it 2021's Breakthrough of the Year.

You can even see it in the first paragraph of this story, which was largely generated for me by the OpenAI language model GPT-3.

While innovation in other technological fields can feel sluggish - as anyone waiting for the metaverse would know - AI is full steam ahead. The rapid pace of progress is feeding on itself, with more companies pouring more resources into AI development and computing power.

Of course, handing over huge sectors of our society to black-box algorithms that we barely understand creates a lot of problems, and those problems have already begun to spark a regulatory response aimed at the current challenges of AI discrimination and bias. But given the speed of development in the field, it's long past time to move beyond a reactive mode, one where we only address AI's downsides once they're clear and present. We can't think only about today's systems; we have to consider where the entire enterprise is headed.

Read more of this story at SoylentNews.
