Habli, Ibrahim orcid.org/0000-0003-2736-8238, Hawkins, Richard orcid.org/0000-0001-7347-3413, Paterson, Colin orcid.org/0000-0002-6678-3752 et al. (4 more authors) (2025) The BIG Argument for AI Safety Cases. [Preprint]
Abstract
We present our Balanced, Integrated and Grounded (BIG) argument for assuring the safety of AI systems. The BIG argument adopts a whole-system approach to constructing a safety case for AI systems of varying capability, autonomy and criticality. Whether the AI capability is narrow and constrained or general-purpose and powered by a frontier or foundational model, the BIG argument insists on a meaningful treatment of safety. It respects long-established safety assurance norms such as sensitivity to context, traceability and risk proportionality. Further, it places a particular focus on the novel hazardous behaviours emerging from the advanced capabilities of frontier AI models and the open contexts in which they are rapidly being deployed. These complex issues are considered within a broader AI safety case that approaches assurance from both technical and sociotechnical perspectives. Examples illustrating the use of the BIG argument are provided throughout the paper.
Metadata
| Field | Value |
|---|---|
| Item Type | Preprint |
| Authors/Creators | Habli, Ibrahim; Hawkins, Richard; Paterson, Colin; et al. (4 more authors) |
| Keywords | AI safety, Frontier AI, Safety cases, Assurance |
| Dates | 2025 |
| Institution | The University of York |
| Academic Units | The University of York > Faculty of Sciences (York) > Computer Science (York) |
| Depositing User | Pure (York) |
| Date Deposited | 14 Mar 2025 12:00 |
| Last Modified | 30 Mar 2025 00:13 |
| Published Version | https://doi.org/10.48550/arXiv.2503.11705 |
| Status | Published |
| Publisher | arXiv (Cornell University) |
| Identification Number | 10.48550/arXiv.2503.11705 |
| Open Archives Initiative ID (OAI ID) | oai:eprints.whiterose.ac.uk:224441 |