Accuracy paradox: addressing epistemic, manipulative, and societal risks of hallucination in AI governance

Li, Z. orcid.org/0000-0003-2109-9196, Yi, W. and Chen, J. orcid.org/0000-0002-1970-6762 (2026) Accuracy paradox: addressing epistemic, manipulative, and societal risks of hallucination in AI governance. Computer Law & Security Review, 61. 106311. ISSN: 2212-473X

Metadata

Item Type: Article
Authors/Creators: Li, Z.; Yi, W.; Chen, J.
Copyright, Publisher and Additional Information:

© 2026 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

Keywords: Accuracy Paradox; Hallucination; Artificial Intelligence; Large Language Models; AI Regulation; Data Protection; AI Governance
Dates:
  • Published (online): 9 April 2026
  • Published: July 2026
Institution: The University of Sheffield
Academic Units: The University of Sheffield > Faculty of Arts and Humanities (Sheffield) > School of Law
Funding Information:
  • RESPONSIBLE AI UK (grant EP/Y009800/1)
  • ECONOMIC & SOCIAL RESEARCH COUNCIL (grant ES/Y00020X/1)
Date Deposited: 13 Apr 2026 12:41
Last Modified: 13 Apr 2026 12:41
Status: Published
Publisher: Elsevier BV
Refereed: Yes
Identification Number: 10.1016/j.clsr.2026.106311
Open Archives Initiative ID (OAI ID):
