Scholars sneaking phrases into papers to fool AI reviewers
By Thomas Claburn, The Register, 2025-07-07 22:03

Using prompt injections to play a Jedi mind trick on LLMs

A handful of international computer science researchers appear to be trying to influence AI reviews with a new class of prompt injection attack....