Conference Paper: “Hallucinations in Scholarly LLMs: A Conceptual Overview and Practical Implications”

DWD Featured Image Feb 9, 2026

The paper linked below appears in the proceedings of the Second AAAI (Association for the Advancement of Artificial Intelligence) Bridge on Artificial Intelligence for Scholarly Communication.

Title

Hallucinations in Scholarly LLMs: A Conceptual Overview and Practical Implications

Authors

Naveen Lamba
Sharda University, India

Sanju Tiwari
Sharda University, India

Manas Gaur
University of Maryland, Baltimore County

Source

The Second Bridge on Artificial Intelligence for Scholarly Communication (AAAI-26) (Open Conference Proceedings)

DOI: 10.52825/ocp.v8i.3175

Abstract

Large language models (LLMs) are gradually entering the academic workflow, but they bring one significant problem: hallucination. Hallucinations include invented research results, fabricated references, and misinterpreted inferences that undermine the credibility and dependability of scholarly writing. This paper discusses hallucination in the context of scholarly communication, identifies its major types, and examines its causes and effects. It also reviews pragmatic mitigation measures, such as retrieval-augmented generation (RAG) for factual grounding, citation verification, and neurosymbolic strategies for structured fact-checking. The paper additionally emphasizes the importance of human-AI partnership in building scholarly tools, so that the use of AI in research remains responsible and verifiable. By synthesizing risks, opportunities, and available mitigation measures, it seeks to raise awareness and guide the creation of reliable AI systems for scholarly contexts. Rather than presenting a comprehensive technical framework, the work offers a conceptual overview that can inform the design of more reliable, transparent, and fact-driven AI-assisted research tools.
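To make the citation-verification idea from the abstract concrete, here is a minimal illustrative sketch (not from the paper itself): before trusting a model-generated reference, extract its DOI and check it against a trusted reference list. The function name, regex, and reference list are illustrative assumptions, not part of the authors' method.

```python
import re

# Illustrative trusted reference list; in practice this would be a
# bibliographic database (e.g., a Crossref lookup), not a hard-coded set.
TRUSTED_DOIS = {
    "10.52825/ocp.v8i.3175",  # the paper discussed above
}

def verify_citation(generated_reference: str) -> bool:
    """Return True only if the reference contains a DOI we can ground."""
    match = re.search(r"10\.\d{4,9}/\S+", generated_reference)
    return bool(match) and match.group(0) in TRUSTED_DOIS

# A real DOI passes; a fabricated ("hallucinated") DOI fails the check.
print(verify_citation("See DOI: 10.52825/ocp.v8i.3175"))  # True
print(verify_citation("See DOI: 10.9999/made.up.2026"))   # False
```

A production system would query a registry such as Crossref rather than a static set, but the control flow (extract, look up, reject if ungrounded) is the same.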


Direct to Full Text Article
10 pages; PDF.

On a Related Note (Also in the Proceedings)

Synergistic AI Agents: Integrating Knowledge Graphs and Large Language Models for Scholarly Communication

Agentic AI is an emerging field of artificial intelligence with great impact on scholarly research, helping to handle large volumes of information from vast corpora. Current agentic AI systems depend on large language models (LLMs) for information retrieval and reasoning. LLMs are very effective at natural language understanding and iterative reasoning, but they have inherent limitations that pose challenges for agentic AI, including provenance tracking, reasoning challenges, temporal staleness, and context dilution. Incorporating knowledge graphs (KGs) alongside LLMs can mitigate these challenges and support deep search in agentic AI. In this work, we explore how KGs are well suited to addressing these challenges and how they can complement LLMs in agentic AI for scholarly research. Furthermore, we investigate the problem of frequency bias inherent in LLMs, which distorts outputs by biasing them toward the most frequent inputs, and examine how KG integration can counteract it. Overall, this work aims to highlight the potential of knowledge graphs for agentic AI in scholarly communication.
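The frequency-bias point above can be sketched in a few lines (a toy illustration under assumed data, not the authors' implementation): the "LLM" is simulated by picking the most frequent answer seen in a corpus, while the knowledge graph stores curated (subject, relation) facts that override the biased guess when present.

```python
from collections import Counter

def frequency_biased_guess(corpus_answers):
    """Simulate an LLM that favors the answer seen most often in its data."""
    return Counter(corpus_answers).most_common(1)[0][0]

def kg_grounded_answer(kg, subject, relation, fallback):
    """Prefer the curated KG triple; fall back to the model's guess."""
    return kg.get((subject, relation), fallback)

# Illustrative data: the frequent (wrong) venue dominates the corpus,
# but the KG holds the curated fact.
kg = {("PaperX", "publishedIn"): "AAAI Bridge Proceedings"}
guess = frequency_biased_guess(
    ["NeurIPS", "NeurIPS", "AAAI Bridge Proceedings"]
)
answer = kg_grounded_answer(kg, "PaperX", "publishedIn", guess)
print(guess)   # NeurIPS — the frequency-biased answer
print(answer)  # AAAI Bridge Proceedings — the KG entry wins
```

When no KG triple exists, the system falls back to the model output, which is the complementary division of labor the abstract describes.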

Direct to Complete Proceedings


