
AI is already causing unintended harm. What happens when it falls into the wrong hands? | David Evan Harris

by David Evan Harris, from Technology | The Guardian

Meta, where I used to work, is developing powerful tools. I'm worried about what could happen if they're picked up by malicious actors

Earlier this year, Facebook's parent company, Meta, granted a researcher access to incredibly potent artificial intelligence software, and that researcher leaked it to the world. As a former researcher on Meta's civic integrity and responsible AI teams, I am terrified by what could happen next.

Though Meta was violated by the leak, it came out as the winner: researchers and independent coders are now racing to improve on or build on the back of LLaMA (Large Language Model Meta AI - Meta's branded version of a large language model or LLM, the type of software underlying ChatGPT), with many sharing their work openly with the world.

Continue reading...
External Content
Source: RSS or Atom Feed
Feed Location: http://feeds.theguardian.com/theguardian/technology/rss
Feed Title: Technology | The Guardian
Feed Link: https://www.theguardian.com/us/technology
Feed Copyright: Guardian News and Media Limited or its affiliated companies. All rights reserved. 2024