ChatGPT 'Not Particularly Innovative' and 'Nothing Revolutionary', Says Meta's Chief AI Scientist
Much ink has been spilled of late about the tremendous promise of OpenAI's ChatGPT program for generating natural-language responses to human prompts. The program strikes many people as so fresh and intriguing that they assume it must be unique. Scholars of AI beg to differ. From a report: "In terms of underlying techniques, ChatGPT is not particularly innovative," said Yann LeCun, Meta's chief AI scientist, in a small gathering of press and executives on Zoom last week. "It's nothing revolutionary, although that's the way it's perceived in the public," said LeCun. "It's just that, you know, it's well put together, it's nicely done." Such data-driven AI systems have been built in the past by many companies and research labs, said LeCun. The notion that OpenAI is alone in this kind of work is inaccurate, he said. "OpenAI is not particularly an advance compared to the other labs, at all," said LeCun. "It's not only just Google and Meta, but there are half a dozen startups that basically have very similar technology to it," added LeCun. "I don't want to say it's not rocket science, but it's really shared, there's no secret behind it, if you will." LeCun noted that ChatGPT, and the program on which it builds, OpenAI's GPT-3, are composed of multiple pieces of technology developed over many years by many parties. "You have to realize, ChatGPT uses Transformer architectures that are pre-trained in this self-supervised manner," observed LeCun. "Self-supervised learning is something I've been advocating for a long time, even before OpenAI existed," he said. "Transformers is a Google invention," noted LeCun, referring to the language neural net unveiled by Google in 2017, which has become the basis for a vast array of language programs, including GPT-3. The work on such language programs goes back decades, said LeCun.
Read more of this story at Slashdot.