Thelwall, M. and Yaghi, A. (2025) Evaluating the predictive capacity of ChatGPT for academic peer review outcomes across multiple platforms. Scientometrics. ISSN 0138-9130
Abstract
Academic peer review is at the heart of scientific quality control, yet the process is slow and time-consuming. Technology that can predict peer review outcomes may help with this, for example by fast-tracking desk rejection decisions. While previous studies have demonstrated that Large Language Models (LLMs) can predict peer review outcomes to some extent, this paper introduces two new contexts and employs a more robust method: averaging multiple ChatGPT scores. Averaging 30 ChatGPT predictions, based on reviewer guidelines and using only the submitted titles and abstracts, failed to predict peer review outcomes for F1000Research (Spearman's rho=0.00). However, it produced mostly weak positive correlations with the quality dimensions of SciPost Physics (rho=0.25 for validity, rho=0.25 for originality, rho=0.20 for significance, and rho=0.08 for clarity) and a moderate positive correlation for papers from the International Conference on Learning Representations (ICLR) (rho=0.38). Including article full texts increased the correlation for ICLR (rho=0.46) and slightly improved it for F1000Research (rho=0.09), with variable effects on the four quality dimension correlations for SciPost LaTeX files. The use of simple chain-of-thought system prompts slightly increased the correlation for F1000Research (rho=0.10), marginally reduced it for ICLR (rho=0.37), and further decreased it for SciPost Physics (rho=0.16 for validity, rho=0.18 for originality, rho=0.18 for significance, and rho=0.05 for clarity). Overall, the results suggest that in some contexts, ChatGPT can produce weak pre-publication quality predictions. However, their effectiveness and the optimal strategies for employing them vary considerably across different platforms, journals, and conferences. Finally, the most suitable inputs for ChatGPT appear to differ depending on the platform.
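The study's core method, averaging many repeated ChatGPT quality scores per paper and then correlating the averages with peer review outcomes using Spearman's rho, can be sketched as below. The paper names, scores, and number of runs here are hypothetical (three runs per paper for brevity, versus the study's 30); only the averaging-then-rank-correlation procedure reflects the described method.

```python
from statistics import mean

def ranks(values):
    """1-based ranks in ascending order; ties share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical repeated ChatGPT quality scores (e.g. on a 1-10 scale) per paper,
# and hypothetical human peer review scores for the same papers.
gpt_runs = {
    "paper_a": [6, 7, 6],
    "paper_b": [4, 5, 4],
    "paper_c": [8, 8, 9],
    "paper_d": [5, 5, 6],
    "paper_e": [7, 6, 7],
}
human = {"paper_a": 7, "paper_b": 3, "paper_c": 9, "paper_d": 5, "paper_e": 6}

papers = sorted(gpt_runs)
avg_scores = [mean(gpt_runs[p]) for p in papers]   # average over repeated runs
human_scores = [human[p] for p in papers]
rho = spearman(avg_scores, human_scores)           # → 0.9 for this toy data
```

Averaging repeated scores reduces the run-to-run randomness of individual LLM responses, which is why the study reports correlations for the mean of 30 predictions rather than for single outputs.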
Metadata
| Item Type: | Article |
| --- | --- |
| Authors/Creators: | Thelwall, M. and Yaghi, A. |
| Copyright, Publisher and Additional Information: | © The Author(s) 2025. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
| Keywords: | ChatGPT; Academic peer review; Journal review; Research evaluation |
| Institution: | The University of Sheffield |
| Academic Units: | The University of Sheffield > Faculty of Social Sciences (Sheffield) > Information School (Sheffield) |
| Funding Information: | UK Research and Innovation, grant number APP43146 |
| Depositing User: | Symplectic Sheffield |
| Date Deposited: | 18 Mar 2025 17:09 |
| Last Modified: | 25 Mar 2025 11:35 |
| Published Version: | https://link.springer.com/article/10.1007/s11192-0... |
| Status: | Published online |
| Publisher: | Springer |
| Refereed: | Yes |
| Identification Number (DOI): | 10.1007/s11192-025-05287-1 |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:224435 |