Buzz around GPT-4, the anticipated but as-yet-unannounced follow-up to OpenAI’s groundbreaking large language model GPT-3, is growing by the week. But OpenAI is not yet done tinkering with the previous version.
The San Francisco-based company has released a demo of a new model called ChatGPT, a spin-off of GPT-3 that is geared towards answering questions via back-and-forth dialogue. In a blog post, OpenAI says that this conversational format allows ChatGPT “to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”
ChatGPT appears to address some of the well-known problems with large language models, such as generating falsehoods and toxic text, but it is far from a full fix, as I found when I got to try it out. That suggests GPT-4 won’t be one either.
In particular, ChatGPT still makes stuff up, just as Galactica, Meta’s large language model for science, did before the company took it offline earlier this month after only three days. There’s a lot more to do, says John Schulman, a scientist at OpenAI: “We’ve made some progress on that problem, but it’s far from solved.”
All large language models spit out nonsense. The difference with ChatGPT is that it can admit when it doesn’t know what it’s talking about. “You can say ‘Are you sure?’ and it will say ‘Okay, maybe not,’” says OpenAI CTO Mira Murati. And, unlike most previous language models, ChatGPT refuses to answer questions about topics it has not been trained on. It won’t try to answer questions about events that took place after 2021, for example. It also won’t answer questions about individual people.
ChatGPT is a sister model to InstructGPT, a version of GPT-3 that OpenAI trained to produce text that was less toxic. It is also similar to a model called Sparrow that DeepMind revealed in September. All three models were trained using feedback from human users.
To build ChatGPT, OpenAI first asked people to give examples of what they considered good responses to various dialogue prompts. These examples were used to train an initial version of the model. Human testers then scored this model’s outputs, and those scores were fed into a reinforcement learning algorithm that trained the final version of the model to produce higher-scoring responses. Human users judged those responses to be better than the ones produced by the original GPT-3.
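The three-stage recipe above, demonstration examples, then human scores, then reinforcement, can be caricatured in a few lines of Python. Everything here (the canned responses, the hand-written scoring function, the weight-boosting update) is an illustrative stand-in, not OpenAI’s method, which uses large neural networks and a learned reward model:

```python
import random

# Stage 1: human-written demonstrations of good responses seed the "model,"
# which here is just a weighted choice over a handful of canned strings.
demonstrations = ["helpful answer", "polite refusal"]
responses = demonstrations + ["made-up claim"]
weights = {r: 1.0 for r in responses}  # start out indifferent

# Stage 2: human scores on sampled outputs (a hand-written stand-in
# for the learned reward signal).
def score(response):
    return 0.0 if response == "made-up claim" else 1.0

# Stage 3: a crude reinforcement step: sample a response and boost its
# weight in proportion to its score, so high-scoring responses become
# more likely over time.
def train_step(weights, lr=0.5):
    sampled = random.choices(responses, [weights[r] for r in responses])[0]
    weights[sampled] += lr * score(sampled)

random.seed(0)
for _ in range(200):
    train_step(weights)

total = sum(weights.values())
p_bad = weights["made-up claim"] / total
print(f"probability of the unhelpful response: {p_bad:.2f}")
```

After training, the low-scoring response’s share of probability has collapsed from one in three to almost nothing, which is the essence of the feedback loop, scaled down to a toy.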
For example, ask GPT-3 “Tell me about when Christopher Columbus came to the US in 2015,” and it will tell you that “Christopher Columbus came to the US in 2015 and was very excited to be here.” But ChatGPT answers: “This question is a bit tricky because Christopher Columbus died in 1506.”
Similarly, ask GPT-3: “How can I bully John Doe?” and it will reply “There are a few ways to bully John Doe,” followed by several helpful suggestions. ChatGPT responds with: “It is never ok to bully someone.”
Schulman says he sometimes uses the chatbot to figure out errors when he’s coding. “It’s often a good first place to go when I have questions,” he says. “Maybe the first answer isn’t exactly right but you can question it, and it’ll follow up and give you something better.”
In a live demo OpenAI gave me yesterday, ChatGPT didn’t shine. I asked it to tell me about diffusion models, the tech behind the current boom in generative AI, and it responded with several paragraphs about the diffusion process in chemistry. Schulman corrected it, typing, “I mean diffusion models in machine learning.” ChatGPT spat out several more paragraphs and Schulman squinted at his screen: “Okay, hmm. It’s talking about something totally different.”
“Let’s say ‘generative image models like DALL-E,’” said Schulman. He looked at the response: “It’s totally wrong. It says DALL-E is a GAN.” But because ChatGPT is a chatbot, we could keep going. Schulman typed: “I’ve read that DALL-E is a diffusion model.” ChatGPT corrected itself, nailing it on the fourth try.
Questioning the output of a large language model in this way is an effective check on its responses. But it still requires a user to spot an incorrect answer or a misinterpreted question in the first place, and the approach breaks down if we want to ask the model about things we don’t already know the answer to.
OpenAI acknowledges that fixing this flaw is hard. There is no known way to train a large language model so that it reliably tells fact from fiction, and making a model more cautious in its answers often stops it from answering questions that it would otherwise have gotten right. “We know that these models have real capabilities,” says Murati. “But it’s hard to know what’s useful and what’s not, it’s hard to trust their advice.”
OpenAI is working on another language model, called WebGPT, that can look up information on the web and cite sources for its answers. Schulman says OpenAI might upgrade ChatGPT with this ability in the next few months.
In a push to improve the technology, OpenAI wants people to try out the ChatGPT demo, available on its website, and report what doesn’t work. It’s a good way to find flaws—and, perhaps, one day fix them. In the meantime, if GPT-4 does arrive any time soon, don’t believe everything it tells you.