TechScape: The people charged with making sure AI doesn’t destroy humanity have left the building
If OpenAI can't keep its own team together, what hope is there for the rest of the industry? Plus, AI-generated "slop" is taking over the internet
Everything happens so much. I'm in Seoul for the International AI summit, the half-year follow-up to last year's Bletchley Park AI safety summit (the full sequel will be in Paris this autumn). While you read this, the first day of events will have just wrapped up - though, in keeping with the reduced fuss this time round, that was merely a "virtual" leaders' meeting.
When the date was set for this summit - alarmingly late in the day for, say, a journalist with two preschool children for whom four days away from home is a juggling act - it was clear that there would be a lot to cover. The hot AI summer is upon us:
The inaugural AI safety summit at Bletchley Park in the UK last year announced an international testing framework for AI models, after calls ... for a six-month pause in development of powerful systems.
There has been no pause. The Bletchley declaration, signed by the UK, US, EU, China and others, hailed the "enormous global opportunities" from AI but also warned of its potential for causing "catastrophic" harm. It also secured a commitment from big tech firms including OpenAI, Google and Mark Zuckerberg's Meta to cooperate with governments on testing their models before they are released.
A former senior employee at OpenAI has said the company behind ChatGPT is prioritising "shiny products" over safety, revealing that he quit after a disagreement over key aims reached "breaking point".
That former employee, Jan Leike, detailed the reasons for his departure in a thread on X posted on Friday, in which he said safety culture had become a lower priority. "Over the past years, safety culture and processes have taken a backseat to shiny products," he wrote.
I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.
If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI due to "losing confidence that it would behave responsibly around the time of AGI", has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.
Slop" is what you get when you shove artificial intelligence-generated material up on the web for anyone to view.
Unlike a chatbot, the slop isn't interactive, and is rarely intended to actually answer readers' questions or serve their needs.