Kaas, Marten Henry Leon and Habli, Ibrahim orcid.org/0000-0003-2736-8238 (2024) Assuring AI safety: fallible knowledge and the Gricean maxims. AI and Ethics. ISSN 2730-5961
Abstract
In this paper we argue that safety claims, when justified by a safety case, are descriptive fallible knowledge claims. Even if the aim of a safety case were to justify infallible knowledge about the safety of a system, such infallible safety knowledge is impossible to attain in the case of AI-enabled systems. By their nature, AI-enabled systems preclude the possibility of obtaining infallible knowledge concerning their safety or lack thereof. We suggest that one can communicate knowledge of an AI-enabled system’s safety by structuring the exchange according to Paul Grice’s Cooperative Principle, which can be achieved via adherence to the Gricean maxims of communication. Furthermore, these same maxims can be used to evaluate the calibre of the exchange, the aim being to ensure that communication of knowledge about an AI-enabled system’s safety is of the highest calibre: in short, that it is relevant, of sufficient quantity and quality, and communicated perspicuously. The high-calibre communication of safety claims to an epistemically diverse group of stakeholders is vitally important given the increasingly participatory nature of AI-enabled system design, development and assessment.
Metadata
| Item Type: | Article |
|---|---|
| Authors/Creators: | Kaas, Marten Henry Leon; Habli, Ibrahim |
| Copyright, Publisher and Additional Information: | © The Author(s) 2024 |
| Dates: | |
| Institution: | The University of York |
| Academic Units: | The University of York > Faculty of Sciences (York) > Computer Science (York) |
| Depositing User: | Pure (York) |
| Date Deposited: | 29 May 2024 10:40 |
| Last Modified: | 07 Mar 2025 00:10 |
| Published Version: | https://doi.org/10.1007/s43681-024-00490-x |
| Status: | Published online |
| Refereed: | Yes |
| Identification Number: | 10.1007/s43681-024-00490-x |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:212802 |