Academics Accuse AI Startups of Co-Opting Peer Review for Publicity
upstart writes:
There's a controversy brewing over "AI-generated" studies submitted to this year's ICLR, a long-running academic conference focused on AI.
At least three AI labs - Sakana, Intology, and Autoscience - claim to have used AI to generate studies that were accepted to ICLR workshops. At conferences like ICLR, workshop organizers typically review studies for publication in the conference's workshop track.
Sakana informed ICLR leaders before it submitted its AI-generated papers and obtained the peer reviewers' consent. The other two labs - Intology and Autoscience - did not, an ICLR spokesperson confirmed to TechCrunch.
Several AI academics took to social media to criticize Intology and Autoscience's stunts as a co-opting of the scientific peer review process.
"All these AI scientist papers are using peer-reviewed venues as their human evals, but no one consented to providing this free labor," wrote Prithviraj Ammanabrolu, an assistant computer science professor at UC San Diego, in an X post. "It makes me lose respect for all those involved regardless of how impressive the system is. Please disclose this to the editors."
As the critics noted, peer review is time-consuming, labor-intensive, and mostly volunteer work. According to one recent Nature survey, 40% of academics spend two to four hours reviewing a single study, and that workload is growing. The number of papers submitted to NeurIPS, the largest AI conference, grew to 17,491 last year, up 41% from 12,345 in 2023.
Academia already has a problem with AI-generated text. One analysis found that between 6.5% and 16.9% of papers submitted to AI conferences in 2023 likely contained synthetic text. But AI companies using peer review to effectively benchmark and advertise their tech is a relatively new development.
Read more of this story at SoylentNews.