Certain Names Make ChatGPT Grind to a Halt, and We Know Why
Freeman writes:
OpenAI's ChatGPT is more than just an AI language model with a fancy interface. It's a system consisting of a stack of AI models and content filters that make sure its outputs don't embarrass OpenAI or get the company into legal trouble when its bot occasionally makes up potentially harmful facts about people.
Recently, that reality made the news when people discovered that the name "David Mayer" breaks ChatGPT. 404 Media also discovered that the names "Jonathan Zittrain" and "Jonathan Turley" caused ChatGPT to cut conversations short. And we know of another name, likely the first to be filtered, that started the practice last year: Brian Hood.
[...] When asked about these names, ChatGPT responds with "I'm unable to produce a response" or "There was an error generating a response" before terminating the chat session, according to Ars' testing.
[...] Here are the ChatGPT-breaking names found so far through a communal effort taking place on [social media] and [Reddit]:
- Brian Hood
- Jonathan Turley
- Jonathan Zittrain
- David Faber
- Guido Scorza
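The behavior described above is consistent with a hard-coded string filter applied outside the model itself, one that halts the session whenever a listed name appears, rather than anything the model learned. A minimal sketch of such a filter (purely illustrative; the names come from the article, but the mechanism and all function names below are assumptions, not OpenAI's actual implementation):

```python
# Hypothetical hard-stop output filter. The blocklist entries are the
# names reported in the article; the filtering logic is an assumption.
BLOCKED_NAMES = {
    "Brian Hood",
    "Jonathan Turley",
    "Jonathan Zittrain",
    "David Faber",
    "Guido Scorza",
}

ERROR_MESSAGE = "I'm unable to produce a response"


def filter_output(text: str) -> str:
    """Return the model's output unchanged, or an error string that
    would end the chat session if a blocked name appears anywhere."""
    lowered = text.lower()
    for name in BLOCKED_NAMES:
        if name.lower() in lowered:
            return ERROR_MESSAGE
    return text
```

A filter like this would explain why the sessions terminate identically regardless of context: the match is a blunt substring check on the output, not a judgment about what was actually said.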
[...] We first discovered that ChatGPT choked on the name "Brian Hood" in mid-2023 while writing about his defamation lawsuit. In that case, the Australian mayor threatened to sue OpenAI after discovering that ChatGPT falsely claimed he had been imprisoned for bribery when, in fact, he was a whistleblower who had exposed corporate misconduct.
Read more of this story at SoylentNews.