Thelwall, M. orcid.org/0000-0001-6065-205X (2025) Can ChatGPT replace citations for quality evaluation of academic articles and journals? Empirical evidence from library and information science. Journal of Documentation. ISSN: 0022-0418
Abstract
Purpose: Whilst citation-based indicators have been recommended by librarians to support research quality evaluation, they have many acknowledged limitations. ChatGPT scores have been proposed as an alternative, but their value needs to be assessed.
Design/methodology/approach: Mean normalised ChatGPT scores and citation rates were correlated for articles published 2016-2020 in 24 medium and large Library and Information Science (LIS) journals on the basis that positive values would tend to support the usefulness of both as research quality indicators. Word association thematic analysis was employed to compare high and low scoring articles for both indicators.
Findings: There was a moderately strong article-level Spearman correlation of rho=0.448 (n=5925) between the two indicators. Moreover, there was a very strong journal-level positive correlation rho=0.843 (n=24) between the two indicators, although three journals had plausible reasons for being relatively little cited compared to their ChatGPT scores. ChatGPT seemed to consider research involving libraries, students, and surveys to be lower quality and research involving theory, statistics, experiments and algorithms to be higher quality, on average. Technology adoption research attracted many citations but low ChatGPT scores, and research mentioning novelty and research context was scored highly by ChatGPT but not extensively cited.
Originality: This is the first evidence that ChatGPT gives plausible quality rankings to library and information science articles, despite giving a slightly different perspective on the discipline.
Practical implications: Academic librarians should be aware of this new type of indicator and be prepared to advise researchers about it.
Metadata
| Field | Value |
|---|---|
| Item Type | Article |
| Authors/Creators | Thelwall, M. (orcid.org/0000-0001-6065-205X) |
| Copyright, Publisher and Additional Information | © 2025, The Authors. Except as otherwise noted, this author-accepted version of a journal article published in Journal of Documentation is made available via the University of Sheffield Research Publications and Copyright Policy under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution and reproduction in any medium, provided the original work is properly cited. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ |
| Keywords | ChatGPT; Large Language Models; LLMs; Research Evaluation; Scientometrics; Bibliometrics |
| Dates | |
| Institution | The University of Sheffield |
| Academic Units | The University of Sheffield > Faculty of Social Sciences (Sheffield) > Information School (Sheffield) |
| Funding Information | UK RESEARCH AND INNOVATION, grant number UKRI1079 |
| Depositing User | Symplectic Sheffield |
| Date Deposited | 17 Jun 2025 15:12 |
| Last Modified | 29 Jul 2025 09:40 |
| Status | Published online |
| Publisher | Emerald |
| Refereed | Yes |
| Identification Number (DOI) | 10.1108/JD-03-2025-0075 |
| Open Archives Initiative ID (OAI ID) | oai:eprints.whiterose.ac.uk:227653 |