AI Researchers Have Started Reviewing Their Peers Using AI Assistance

by BeauHD from Slashdot on (#6KFGZ)
Academics in the artificial intelligence field have started using generative AI services to help them review the machine learning work of their peers. In a new paper on arXiv, researchers analyzed the peer reviews of papers submitted to leading AI conferences, including ICLR 2024, NeurIPS 2023, CoRL 2023 and EMNLP 2023. The Register reports on the findings:

The authors took two sets of data, or corpora -- one written by humans and the other written by machines. They used these two bodies of text to evaluate the evaluations -- the peer reviews of conference AI papers -- for the frequency of specific adjectives. "[A]ll of our calculations depend only on the adjectives contained in each document," they explained. "We found this vocabulary choice to exhibit greater stability than using other parts of speech such as adverbs, verbs, nouns, or all possible tokens." It turns out LLMs tend to employ adjectives like "commendable," "innovative," and "comprehensive" more frequently than human authors, and such statistical differences in word usage have allowed the boffins to identify reviews where LLM assistance is deemed likely.

"Our results suggest that between 6.5 percent and 16.9 percent of text submitted as peer reviews to these conferences could have been substantially modified by LLMs, i.e. beyond spell-checking or minor writing updates," the authors argued, noting that reviews of work in the scientific journal Nature do not exhibit signs of mechanized assistance. Several factors appear to be correlated with greater LLM usage. One is an approaching deadline: the authors found a small but consistent increase in apparent LLM usage for reviews submitted three days or less before the deadline.

The researchers emphasized that their intention was not to pass judgment on the use of AI writing assistance, nor to claim that any of the reviews they evaluated were written entirely by an AI model. But they argued that the scientific community needs to be more transparent about the use of LLMs, and they contended that such practices potentially deprive those whose work is being reviewed of diverse feedback from experts. What's more, AI feedback risks a homogenization effect that skews toward AI model biases and away from meaningful insight.
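The detection approach lends itself to a compact illustration. Below is a minimal Python sketch of the idea, not the authors' released code: the function names, the NLTK-based tagger, and the token-level maximum-likelihood fit are all illustrative assumptions (the paper itself estimates the fraction at the document level). It builds adjective-frequency distributions from a known-human corpus and a known-LLM corpus, then fits the mixture weight that best explains the adjectives observed in the target reviews.

```python
# A minimal sketch, assuming NLTK for POS tagging and a token-level
# mixture MLE; the paper's own estimator works at the document level.
# One-time setup: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
from collections import Counter

import nltk
import numpy as np
from scipy.optimize import minimize_scalar

ADJ_TAGS = {"JJ", "JJR", "JJS"}  # Penn Treebank adjective tags

def adjective_counts(docs):
    """Count adjective occurrences across a list of documents."""
    counts = Counter()
    for doc in docs:
        tagged = nltk.pos_tag(nltk.word_tokenize(doc))
        counts.update(tok.lower() for tok, tag in tagged if tag in ADJ_TAGS)
    return counts

def smoothed_distribution(counts, vocab, eps=1e-9):
    """Probability of each vocabulary adjective, with add-eps smoothing."""
    total = sum(counts[w] for w in vocab) + eps * len(vocab)
    return np.array([(counts[w] + eps) / total for w in vocab])

def estimate_llm_fraction(human_docs, llm_docs, target_docs):
    """MLE of alpha in the mixture alpha * P_llm + (1 - alpha) * P_human
    that best explains the adjectives seen in the target corpus."""
    h, l, t = map(adjective_counts, (human_docs, llm_docs, target_docs))
    vocab = sorted(set(h) | set(l) | set(t))
    p_human = smoothed_distribution(h, vocab)
    p_llm = smoothed_distribution(l, vocab)
    n = np.array([t[w] for w in vocab], dtype=float)

    def neg_log_likelihood(alpha):
        mix = alpha * p_llm + (1 - alpha) * p_human
        return -np.dot(n, np.log(mix))

    res = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1.0), method="bounded")
    return res.x  # estimated fraction of LLM-influenced adjective usage
```

On this framing, an adjective like "commendable" carrying far more probability mass under the LLM distribution than under the human one is exactly what pushes the fitted mixture weight upward for a corpus of suspect reviews.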


