Book Chapter (preprint): Responsible Intelligence in Practice: A Fairness Audit of Open Large Language Models for Library Reference Services

Libraries & Librarians

The preprint linked below was recently shared on arXiv.

Title

Responsible Intelligence in Practice: A Fairness Audit of Open Large Language Models for Library Reference Services

Authors

Haining Wang
Indiana University School of Medicine

Jason Clark
Montana State University 

Angelica Peña
San Leandro Public Library

Source

via arXiv

DOI: 10.48550/arXiv.2602.18935

Abstract

As libraries explore large language models (LLMs) as a scalable layer for reference services, a core fairness question follows: can LLM-based services support all patrons fairly, regardless of demographic identity? While LLMs offer great potential for broadening access to information assistance, they may also reproduce societal biases embedded in their training data, potentially undermining libraries’ commitments to impartial service. In this chapter, we apply a systematic evaluation approach that combines diagnostic classification to detect systematic differences with linguistic analysis to interpret their sources. Across three widely used open models (Llama-3.1 8B, Gemma-2 9B, and Ministral 8B), we find no compelling evidence of systematic differentiation by race/ethnicity, and only minor evidence of sex-linked differentiation in one model. We discuss implications for responsible AI adoption in libraries and the importance of ongoing monitoring in aligning LLM-based services with core professional values.

Direct to Abstract + Link to Full Text

Note

Invited chapter for the edited volume Artificial Intelligence and Social Justice Intersections in Library and Information Studies: Challenges and Opportunities (Emerald Group Publishing, in preparation)

The post Book Chapter (preprint): Responsible Intelligence in Practice: A Fairness Audit of Open Large Language Models for Library Reference Services appeared first on Library Journal infoDOCKET.

