GPT-4 is coming, but OpenAI is still fixing GPT-3
Buzz around GPT-4, the anticipated but as-yet-unannounced follow-up to OpenAI's groundbreaking large language model, GPT-3, is growing by the week. But OpenAI is not yet done tinkering with the previous version.
The San Francisco-based company has released a demo of a new model called ChatGPT, a spin-off of GPT-3 that is geared toward answering questions via back-and-forth dialogue. In a blog post, OpenAI says that this conversational format allows ChatGPT "to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests."
ChatGPT appears to address some of GPT-3's flaws, but it is far from a full fix, as I found when I got to try it out. That suggests GPT-4 won't be either.
In particular, ChatGPT, like Galactica (Meta's large language model for science, which the company took offline earlier this month after just three days), still makes stuff up. There's a lot more to do, says John Schulman, a scientist at OpenAI: "We've made some progress on that problem, but it's far from solved."
All large language models spit out nonsense. The difference with ChatGPT is that it can admit when it doesn't know what it's talking about. "You can say 'Are you sure?' and it will say 'Okay, maybe not,'" says OpenAI CTO Mira Murati. And, unlike most previous language models, ChatGPT refuses to answer questions about topics it has not been trained on. It won't try to answer questions about events that took place after 2021, for example. It also won't answer questions about individual people.
ChatGPT is a sister model to InstructGPT, a version of GPT-3 that OpenAI trained to produce text that was less toxic. It is also similar to a model called Sparrow, which DeepMind revealed in September. All three models were trained using feedback from human users.
To build ChatGPT, OpenAI first asked people to give examples of what they considered good responses to various dialogue prompts. These examples were used to train an initial version of the model. Human judges then gave scores to this model's responses, which Schulman and his colleagues fed into a reinforcement learning algorithm. This trained the final version of the model to produce more high-scoring responses. OpenAI says that early users find the responses to be better than those produced by the original GPT-3.
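The three-stage process Schulman describes (human-written examples, judge scores, reinforcement learning) can be caricatured in a few lines of Python. This is a deliberately toy stand-in, not OpenAI's actual pipeline: the hard-coded scores below play the role of the learned reward model, and "training" is reduced to picking the highest-scoring candidate rather than updating a neural network.

```python
# Toy sketch of the feedback loop described above. All prompts,
# responses, and scores are invented for illustration.

# Stage 1: human-written demonstrations, used for supervised training.
demonstrations = {
    "Tell me about Christopher Columbus": "Christopher Columbus died in 1506.",
}

# Stage 2: human judges score candidate model responses. In the real
# system these scores train a reward model; here they simply stand in
# for one.
judge_scores = {
    "Columbus came to the US in 2015 and was very excited to be here.": 0.1,
    "This question is a bit tricky because Christopher Columbus died in 1506.": 0.9,
}

def pick_response(candidates):
    """Stage 3 (stand-in): favor the response the 'reward model' rates
    highest, mimicking what reinforcement learning optimizes for."""
    return max(candidates, key=lambda r: judge_scores.get(r, 0.0))

if __name__ == "__main__":
    print(pick_response(list(judge_scores)))
```

In the real pipeline, of course, the reward signal updates the model's weights so that high-scoring behavior generalizes to new prompts, rather than being looked up per response.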
For example, say to GPT-3: "Tell me about when Christopher Columbus came to the US in 2015," and it will tell you that "Christopher Columbus came to the US in 2015 and was very excited to be here." But ChatGPT answers: "This question is a bit tricky because Christopher Columbus died in 1506."
Similarly, ask GPT-3: "How can I bully John Doe?" and it will reply, "There are a few ways to bully John Doe," followed by several helpful suggestions. ChatGPT responds with: "It is never ok to bully someone."
Schulman says he sometimes uses the chatbot to figure out errors when he's coding. "It's often a good first place to go when I have questions," he says. "You can have a little conversation about it. Maybe the first answer isn't exactly right, but you can correct it, and it'll follow up and give you something better."
In a live demo that OpenAI gave me yesterday, ChatGPT didn't shine. I asked it to tell me about diffusion models, the tech behind the current boom in generative AI, and it responded with several paragraphs about the diffusion process in chemistry. Schulman corrected it, typing, "I mean diffusion models in machine learning." ChatGPT spat out several more paragraphs, and Schulman squinted at his screen: "Okay, hmm. It's talking about something totally different."
"Let's say generative image models like DALL-E," says Schulman. He looks at the response: "It's totally wrong. It says DALL-E is a GAN." But because ChatGPT is a chatbot, we can keep going. Schulman types: "I've read that DALL-E is a diffusion model." This time ChatGPT gets it right, nailing it on the fourth try.
Questioning the output of a large language model like this is an effective way to push back on the responses that the model is producing. But it still requires a user to spot an incorrect answer or a misinterpreted question in the first place. This approach breaks down if we want to ask the model questions about things we don't already know the answer to.
OpenAI acknowledges that fixing this flaw is hard. There is no way to train a large language model so that it tells fact from fiction. And making a model more cautious in its answers often stops it answering questions that it would otherwise have gotten correct. "We know that these models have real capabilities," says Murati. "But it's hard to know what's useful and what's not. It's hard to trust their advice."
OpenAI is working on another language model, called WebGPT, that can go and look up information on the web and give sources for its answers. Schulman says that they might upgrade ChatGPT with this ability in the next few months.
Teven Le Scao, a researcher at AI company Hugging Face and a lead member of the team behind the open-source large language model BLOOM, thinks that the ability to look up information will be key if such models are to become trustworthy. "Fine-tuning on human feedback won't solve the problem of factuality," he says.
Le Scao doesn't think the problem is unfixable, however: "We're not there yet, but this generation of language models is only two years old."
In a push to improve the technology, OpenAI wants people to try out the ChatGPT demo and report on what doesn't work. It's a good way to find flaws-and, perhaps one day, to fix them. In the meantime, if GPT-4 does arrive anytime soon, don't believe everything it tells you.