AI is already causing unintended harm. What happens when it falls into the wrong hands? | David Evan Harris
Meta, where I used to work, is developing powerful tools. I'm worried about what could happen if they're picked up by malicious actors
Earlier this year, Facebook's parent company, Meta, granted a researcher access to incredibly potent artificial intelligence software - and the researcher leaked it to the world. As a former researcher on Meta's civic integrity and responsible AI teams, I am terrified by what could happen next.
Though Meta's trust was violated by the leak, the company came out the winner: researchers and independent coders are now racing to improve on LLaMA (Large Language Model Meta AI, Meta's branded version of a large language model, or LLM - the type of software underlying ChatGPT) or to build on top of it, and many are sharing their work openly with the world.