OpenAI's GPT-4 Is Closed Source and Shrouded in Secrecy
OpenAI released a 98-page technical report on Tuesday to accompany the unveiling of its latest large language model, GPT-4. Amid the hype around the model's new capabilities, such as its ability to pass the bar exam, there is growing criticism from AI researchers who point out that the paper is not transparent or "open" in any meaningful way.
The report, whose sole author is listed as the company rather than specific researchers, explicitly says, "Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method."
This means that OpenAI did not disclose what data it used to train the model or how it trained it, including the energy costs and hardware involved, making GPT-4 the company's most secretive release thus far. As Motherboard has noted before, this is a complete 180 from OpenAI's founding principles as a nonprofit, open-source entity.
AI researchers are warning about the potential consequences of withholding this information.
Ben Schmidt, the Vice President of Information Design at Nomic, tweeted that OpenAI's failure to share its datasets means it's impossible to evaluate whether the training sets have specific biases. "To ameliorate those harms, and to make informed decisions about where a model should not be used, we need to know what kinds of biases are built in. OpenAI's choices make this impossible," he wrote.
"After reading the almost 100-page report and system card about GPT-4, I have more questions than answers. And as a scientist, it's hard for me to rely upon results that I can't verify or replicate," Sasha Luccioni, a Research Scientist at Hugging Face, told Motherboard. Hugging Face is a company that provides open-source tools and training sets for building machine learning applications.
"It really bothers me that the human costs of this research (in terms of hours put in by human evaluators and annotators) as well as its environmental costs (in terms of the emissions generated by training these models) just get swept under the rug, and not disclosed as they should be," Luccioni said.
"It's a product. Not science," Prithviraj Ammanabrolu, a researcher at the Allen Institute for Artificial Intelligence, tweeted.
"I did think that calling that 98-page report a 'technical report' is misleading, as there were no technical details," Subbarao Kambhampati, a professor in the School of Computing & AI at Arizona State University, told Motherboard.
Emily M. Bender, a Professor of Linguistics at the University of Washington, tweeted that this secrecy did not come as a surprise to her. "They are willfully ignoring the most basic risk mitigation strategies, all while proclaiming themselves to be working towards the benefit of humanity," she tweeted.
The decision to keep these details private reveals the company's complete shift from its founding principles, in which it declared that all researchers would be encouraged to share "papers, blog posts, or code" and that it would be dedicated to research that advances humanity as a whole, "unconstrained by a need to generate financial return." Now, OpenAI is a for-profit company, and Microsoft's Bing was the first product to use GPT-4, rather than the model being shared openly with the public.
OpenAI's chief scientist Ilya Sutskever has been defending the company's decision in response to this backlash. "Safety is not a binary thing; it is a process," Sutskever told MIT Technology Review. "Things get complicated any time you reach a level of new capabilities."
"It's competitive out there. GPT-4 is not easy to develop. It took pretty much all of OpenAI working together for a very long time to produce this thing. And there are many, many companies who want to do the same thing, so from a competitive side, you can see this as a maturation of the field," Sutskever told The Verge.
He continued, saying "we were wrong" to have been open source in the beginning. "I fully expect that in a few years it's going to be completely obvious to everyone that open-sourcing AI is just not wise," Sutskever said.
Many researchers still do not buy Sutskever's response, claiming that OpenAI is interested in profiting from the technology. "This makes no sense. They aren't doing anything anyone else with resources cannot do. They are just protecting their place in the market," Mark Riedl, a Professor at the School of Interactive Computing at Georgia Tech, tweeted in response.
William Falcon, CEO of Lightning AI and creator of an open-source Python library called PyTorch Lightning, told VentureBeat that OpenAI's paper was masquerading as research. Falcon said that although it's fair to want to prevent competitors from copying your model, OpenAI is following a Silicon Valley startup model rather than an academic one, in which ethics matter.
"If this model goes wrong, and it will, you've already seen it with hallucinations and giving you false information, how is the community supposed to react? How are ethical researchers supposed to go and actually suggest solutions and say, 'this way doesn't work, maybe tweak it to do this other thing'?" Falcon asked.
"It certainly gives you the sense that the research on these large models is entering a 'proprietary' phase, with other companies following OpenAI's example. The irony of the practice starting not with a company like Apple, but with a company originally formed with the express intention of 'openness' in AI research is quite rich," Kambhampati added.
So far, ChatGPT and Bing Chat, which run on GPT models, have had problems. Microsoft Bing, which had been using GPT-4 since November 2022, generated misinformation and berated users. The new GPT-4 apparently has more guardrails, after being fine-tuned on human feedback from its Bing test run, but it is very unlikely that this will prevent all potential problems. For example, researchers found that ChatGPT could be easily broken by entering strange words that came from a subreddit called r/counting. This is to say that keeping GPT-4 closed source serves OpenAI's interests in multiple ways: competitors can't copy it, but ethical AI researchers and users also can't scrutinize it to point out obvious problems. Keeping it closed source doesn't mean those problems don't exist; it just means they'll remain hidden until people stumble on them or something goes amiss.
Keeping its training set secret also has the effect of making it more difficult for people to know whether their intellectual property and copyrighted work have been scraped.
"It's hard to believe that 'competition' and 'safety' are the only reasons for OpenAI's secrecy, when hiding training data makes it harder to follow the anti-Stability playbook and sue them for appropriating others' work," Schmidt wrote, referring to a lawsuit that artists have waged against Stability AI for training Stable Diffusion on copyrighted images. Stable Diffusion was trained on LAION-5B, an open-source dataset, which meant the public could check whether their own images were included in the training data.
GPT-4's release is the latest volley from OpenAI in an AI arms race. Big tech companies like Google, Microsoft, and Meta are racing to create new AI technologies as fast as possible, often sidestepping or shrugging off ethical concerns along the way. Google announced on Wednesday that it is launching an API for its PaLM language model for businesses and developers to use. Meanwhile, Microsoft cut an entire ethics and society team within its AI department as part of its recent layoffs, leaving the company without a dedicated responsible AI team even as it continues to adopt OpenAI products as part of its business.