Thelwall, M. orcid.org/0000-0001-6065-205X (Accepted: 2025) Can ChatGPT replace citations for quality evaluation of academic articles and journals? Empirical evidence from library and information science. Journal of Documentation. ISSN 0022-0418 (In Press)
Abstract
Purpose: Whilst citation-based indicators have been recommended by librarians to support research quality evaluation, they have many acknowledged limitations. ChatGPT scores have been proposed as an alternative, but their value needs to be assessed.
Design/methodology/approach: Mean normalised ChatGPT scores and citation rates were correlated for articles published 2016-2020 in 24 medium and large Library and Information Science (LIS) journals on the basis that positive values would tend to support the usefulness of both as research quality indicators. Word association thematic analysis was employed to compare high and low scoring articles for both indicators.
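As a rough illustration of this design (not the paper's actual code, whose exact normalisation and prompting procedure are not specified in the abstract), the sketch below averages repeated ChatGPT quality scores per article, normalises citation counts by publication-year mean, and computes the Spearman correlation between the two. The article records, field names, and year-mean normalisation are assumptions for illustration only.

```python
# Hypothetical sketch: correlating mean ChatGPT quality scores with
# year-normalised citation counts, as the abstract describes.
from statistics import mean
from scipy.stats import spearmanr

def normalised_citations(articles):
    """Divide each article's citation count by the mean count for its
    publication year (one plausible form of 'normalised citation rate')."""
    by_year = {}
    for a in articles:
        by_year.setdefault(a["year"], []).append(a["citations"])
    year_means = {y: mean(c) for y, c in by_year.items()}
    return [a["citations"] / year_means[a["year"]] for a in articles]

def mean_chatgpt_scores(articles):
    """Average repeated ChatGPT quality scores (e.g. several runs per article)."""
    return [mean(a["chatgpt_scores"]) for a in articles]

# Toy data standing in for the 5,925 LIS articles analysed in the paper.
articles = [
    {"year": 2016, "citations": 12, "chatgpt_scores": [3, 3, 4]},
    {"year": 2016, "citations": 2,  "chatgpt_scores": [2, 2, 2]},
    {"year": 2020, "citations": 30, "chatgpt_scores": [4, 4, 3]},
    {"year": 2020, "citations": 5,  "chatgpt_scores": [2, 3, 2]},
]

rho, p = spearmanr(mean_chatgpt_scores(articles), normalised_citations(articles))
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")
```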
Findings: There was a moderately strong article-level Spearman correlation of rho=0.448 (n=5925) between the two indicators. Moreover, there was a very strong journal-level positive correlation rho=0.843 (n=24) between the two indicators, although three journals had plausible reasons for being relatively little cited compared to their ChatGPT scores. ChatGPT seemed to consider research involving libraries, students, and surveys to be lower quality and research involving theory, statistics, experiments and algorithms to be higher quality, on average. Technology adoption research attracted many citations but low ChatGPT scores, and research mentioning novelty and research context was scored highly by ChatGPT but not extensively cited.
Originality: This is the first evidence that ChatGPT gives plausible quality rankings to library and information science articles, despite giving a slightly different perspective on the discipline.
Practical implications: Academic librarians should be aware of this new type of indicator and be prepared to advise researchers about it.
Metadata
Item Type: | Article |
---|---|
Authors/Creators: | Thelwall, M. (orcid.org/0000-0001-6065-205X) |
Copyright, Publisher and Additional Information: | © 2025, Emerald Publishing Limited. |
Keywords: | ChatGPT; Large Language Models; LLMs; Research Evaluation; Scientometrics; Bibliometrics |
Dates: | Accepted: 2025 |
Institution: | The University of Sheffield |
Academic Units: | The University of Sheffield > Faculty of Social Sciences (Sheffield) > Information School (Sheffield) |
Funding Information: | UK Research and Innovation (UKRI), grant number UKRI1079 |
Depositing User: | Symplectic Sheffield |
Date Deposited: | 17 Jun 2025 15:12 |
Last Modified: | 17 Jun 2025 15:12 |
Status: | In Press |
Publisher: | Emerald |
Refereed: | Yes |
Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:227653 |
Download
Filename: ChatGPT LIS jdoc_R1a.pdf
