From Nature:
PsyArXiv is just one of the many preprint servers — and journals — that are grappling with suspicious submissions. Some papers bear the fingerprints of paper mills, which are services that produce scientific papers on demand. Others show evidence of content written by AI systems, such as fake references, which can be a sign of an AI ‘hallucination’.
Such content poses a conundrum for preprint services. Many are non-profit organizations devoted to making it easier for scientists to publish their work, and screening for low-quality content demands resources and can slow processing of submissions. Such screening also raises questions about which manuscripts to allow. And this influx of dodgy content has its own risks.
[Clip]
The preprint services approached by Nature said that a relatively small proportion of their submissions bear signs of being generated by a large language model (LLM) such as the one that drives OpenAI’s ChatGPT. The operators of the preprint server arXiv, for example, estimate that roughly 2% of their submissions are rejected for being products of AI, paper mills or both.
Richard Sever, head of openRxiv, which operates life-sciences preprint server bioRxiv and biomedical server medRxiv, based in New York City, says that the two combined turn away more than ten manuscripts per day that seem formulaic, and might have been AI-generated. The services receive roughly 7,000 submissions per month.
