OpenAI Has Released the Largest Version Yet of Its Fake-News-Spewing AI
upstart writes:
Submitted via IRC for SoyCow2718
OpenAI has released the largest version yet of its fake-news-spewing AI
In February OpenAI catapulted itself into the public eye when it produced a language model so good at generating fake news that the organization decided not to release it. Some within the AI research community argued it was a smart precaution; others wrote it off as a publicity stunt. The lab itself, a small San Francisco-based for-profit that seeks to create artificial general intelligence, has firmly held that it is an important experiment in how to handle high-stakes research.
Now six months later, the policy team has published a paper examining the impact of the decision thus far. Alongside it, the lab has released a version of the model, known as GPT-2, that's half the size of the full one, which has still not been released.
In May, a few months after GPT-2's initial debut, OpenAI revised its stance on withholding the full code, adopting what it calls a "staged release": the staggered publication of incrementally larger versions of the model in a ramp-up to the full one. In February it had published a version that was merely 8% of the size of the full model; it then released another, roughly a quarter of the full size, before the most recent release. During this process, it also partnered with selected research institutions to study the full model's implications.
[...] The authors concluded that, after careful monitoring, OpenAI had not yet found any attempts at malicious use but had seen multiple beneficial applications, including code autocompletion, grammar help, and question-answering systems for medical assistance. As a result, the lab felt that releasing the most recent code was ultimately more beneficial. Other researchers argue that several successful efforts to replicate GPT-2 have made OpenAI's withholding of the code moot anyway.
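As an illustration of the kind of application the paper describes, here is a minimal sketch that generates text from one of the publicly released GPT-2 checkpoints. It assumes the Hugging Face transformers library and its model identifier "gpt2-large" (the roughly half-size, ~774M-parameter checkpoint); neither the library nor the exact parameter counts are named in the article.

    # Illustrative only: generate text with a released GPT-2 checkpoint via the
    # Hugging Face "transformers" library (an assumption; the article does not
    # name any particular toolkit). "gpt2-large" is that library's identifier
    # for the ~774M-parameter model, roughly half the size of the full,
    # still-unreleased version discussed above.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2-large")

    prompt = "OpenAI has released the largest version yet of its language model,"
    outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print(outputs[0]["generated_text"])

Swapping in "gpt2" or "gpt2-medium" selects the smaller checkpoints released earlier in the staged-release process.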
OpenAI Can No Longer Hide Its Alarmingly Good Robot 'Fake News' Writer
But it may not ultimately be up to OpenAI. This week, Wired magazine reported that two young computer scientists from Brown University, Aaron Gokaslan, 23, and Vanya Cohen, 24, had published what they called a recreation of OpenAI's shelved original GPT-2 software on the internet for anyone to download. The pair said their work was meant to prove that creating this kind of software doesn't require an expensive lab like OpenAI (backed by $2 billion in endowment and corporate dollars). They also said they don't believe such software poses an imminent danger to society.
Also at BBC.
See also: Elon Musk: Computers will surpass us 'in every single way'
Previously: OpenAI Develops Text-Generating Algorithm, Considers It Too Dangerous to Release