Thelwall, M. (Accepted: 2026) Large Language Models and responsible research evaluation: an extension of the Leiden Manifesto. Scientometrics. ISSN: 0138-9130 (In Press)
Abstract
Research evaluators and scientometricians promote responsible bibliometrics through initiatives like the Leiden Manifesto, but these do not mention Large Language Models (LLMs). Since there is evidence that LLMs can make quality predictions for journal articles that correlate more strongly with expert judgements than citation-based indicators do in most fields, they may start to supplement or replace citation-based indicators for some applications. This paper compares responsible evaluation principles from the Leiden Manifesto with those necessary for LLMs, finding all of them to be still relevant. It also discusses how these principles apply to human expert review and the differences between the three approaches. For example, transparency is inherently weak for LLMs, since their decision-making processes are too complex to fully understand (as is also true of human experts). Conversely, LLMs may be able to address some issues that citation-based indicators cannot, such as by adapting prompts to the goals of an evaluation. Other issues play out differently: LLM evaluations might encourage authors to tailor articles to persuade LLMs, whereas bibliometric evaluations might encourage authors to solicit citations. Finally, two additions to the Leiden Manifesto targeting LLM-supported research evaluations are proposed. First, the cost/benefit tradeoff should be considered when deciding which approach to use: in resource-poor contexts or for minor evaluations, it may be irresponsible to implement otherwise fully responsible solutions. Second, LLM evaluations need to comply with national copyright law when processing academic texts, because the ability to access a text does not always entail permission to automatically process it.
Metadata
| Item Type: | Article |
|---|---|
| Authors/Creators: | Thelwall, M. |
| Copyright, Publisher and Additional Information: | © 2026 Akadémiai Kiadó Zrt. |
| Keywords: | Responsible research evaluation; Large Language Models; Leiden Manifesto |
| Dates: | Accepted: 2026 |
| Institution: | The University of Sheffield |
| Academic Units: | The University of Sheffield > Faculty of Social Sciences (Sheffield) > School of Information, Journalism and Communication |
| Funding Information: | UK Research and Innovation (grant UKRI1079) |
| Date Deposited: | 23 Jan 2026 17:44 |
| Last Modified: | 23 Jan 2026 17:44 |
| Status: | In Press |
| Publisher: | Springer |
| Refereed: | Yes |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:236458 |
Download
Filename: Responsible Uses of Large Language Models V3a by Leiden_R1_preprint.pdf