Use of AI Is Seeping Into Academic Journals - and It's Proving Difficult To Detect
The rapid rise of generative AI has stoked anxieties across disciplines. High school teachers and college professors are worried about the potential for cheating. News organizations have been caught with shoddy articles penned by AI. And now, peer-reviewed academic journals are grappling with submissions in which the authors may have used generative AI to write outlines, drafts, or even entire papers, but failed to make the AI use clear. Wired:

Journals are taking a patchwork approach to the problem. The JAMA Network, which includes titles published by the American Medical Association, prohibits listing artificial intelligence generators as authors and requires disclosure of their use. The family of journals produced by Science does not allow text, figures, images, or data generated by AI to be used without editors' permission. PLOS ONE requires anyone who uses AI to detail what tool they used, how they used it, and ways they evaluated the validity of the generated information. Nature has banned images and videos that are generated by AI, and it requires the use of language models to be disclosed. Many journals' policies make authors responsible for the validity of any information generated by AI.

Experts say there's a balance to strike in the academic world when using generative AI -- it could make the writing process more efficient and help researchers more clearly convey their findings. But the tech -- when used in many kinds of writing -- has also dropped fake references into its responses, made things up, and reiterated sexist and racist content from the internet, all of which would be problematic if included in published scientific writing.

If researchers use these generated responses in their work without strict vetting or disclosure, they raise major credibility issues. Not disclosing use of AI would mean authors are passing off generative AI content as their own, which could be considered plagiarism. They could also potentially be spreading AI's hallucinations, or its uncanny ability to make things up and state them as fact.
Read more of this story at Slashdot.