
Research repository ArXiv will ban authors for a year if they let AI do all the work
ArXiv, a widely used open repository for preprint research, is doing more to crack down on the careless use of large language models in scientific papers.

Although papers are posted to the site before they are peer-reviewed, arXiv (pronounced “archive”) has become one of the main ways that research circulates in fields like computer science and math, and the site itself has become a source of data on trends in scientific research. ArXiv has already taken steps to combat a growing number of low-quality, AI-generated papers, for example by requiring first-time posters to get an endorsement from an established author. And after being hosted by Cornell for more than 20 years, the organization is becoming an independent nonprofit, which should allow it to raise more money to address issues like AI slop.

In its latest move, Thomas Dietterich, the chair of arXiv’s computer science section, posted Thursday that “if a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can’t trust anything in the paper.” That incontrovertible evidence could include things like “hallucinated references” and comments to or from the LLM, Dietterich said. If such evidence is found, a paper’s authors will face “a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted by a reputable peer-reviewed venue.”

Note that this isn’t an outright prohibition on using LLMs, but rather an insistence that, as Dietterich put it, authors take “full responsibility” for the content, “irrespective of how the contents are generated.” So if researchers copy-paste “inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content” directly from an LLM, then they’re still responsible for it.

Dietterich told 404 Media that this will be a “one-strike” rule, but moderators must flag the issue and section chairs must confirm the evidence before imposing the penalty. Authors will also be able to appeal the decision. Recent peer-reviewed research has found that fabricated citations are on the rise in biomedical research, likely due to LLMs, though to be fair, scientists aren’t the only ones getting caught using citations that were made up by AI.