LLM-based adversarial persuasion attacks on fact-checking systems.

This is a preprint and may not have undergone formal peer review.

Leite, J.A., Razuvayevskaya, O., Bontcheva, K. (orcid.org/0000-0001-6152-9600) et al. (1 more author) (2026) LLM-based adversarial persuasion attacks on fact-checking systems. arXiv preprint (Submitted).

Abstract

Metadata

Item Type: Preprint
Copyright, Publisher and Additional Information:

© 2026 The Author(s). For reuse permissions, please contact the Author(s).

Dates:
  • Submitted: 23 January 2026
Institution: The University of Sheffield
Academic Units: The University of Sheffield > Faculty of Engineering (Sheffield) > Department of Computer Science (Sheffield)
Funding Information:
  • Funder: Engineering and Physical Sciences Research Council (EPSRC); Grant number: UKRI3352
Date Deposited: 09 Mar 2026 15:53
Last Modified: 09 Mar 2026 15:54
Status: Submitted
Identification Number: 10.48550/arXiv.2601.16890