FEAKINS, SHAUN, HABLI, IBRAHIM orcid.org/0000-0003-2736-8238 and MORGAN, PHILLIP DAVID JAMES orcid.org/0000-0002-8797-4216 (2026) Clear, Compelling Arguments: Rethinking the Foundations of Frontier AI Safety Cases. In: The International Association for Safe & Ethical AI Conference (IASEAIʼ26).
Abstract
This paper contributes to the nascent debate around safety cases for frontier AI systems. Safety cases are structured, defensible arguments that a system is acceptably safe to deploy in a given context. Historically, they have been used in safety-critical industries such as aerospace, nuclear and automotive. Safety cases for frontier AI have recently risen in prominence, both in the safety policies of leading frontier developers and in international research agendas, such as the Singapore Consensus on Global AI Safety Research Priorities and the International AI Safety Report. This paper appraises this work. We note that research conducted within the alignment community which draws explicitly on lessons from the assurance community has significant limitations. We therefore aim to rethink existing approaches to alignment safety cases. We offer lessons from existing methodologies within safety assurance and outline the limitations of the alignment community’s current approach. Building on this foundation, we present a case study for a safety case focused on Deceptive Alignment and CBRN capabilities, drawing on existing theoretical safety case “sketches” created by the alignment safety case community. Overall, we contribute holistic insights from the field of safety assurance via rigorous theory and methodologies that have been applied in safety-critical contexts. We do so in order to create a better foundational framework for robust, defensible and useful safety case methodologies which can help to assure the safety of frontier AI systems.
Metadata
| Item Type: | Proceedings Paper |
|---|---|
| Authors/Creators: | Feakins, Shaun; Habli, Ibrahim; Morgan, Phillip David James |
| Copyright, Publisher and Additional Information: | This is an author-produced version of the published paper. Uploaded in accordance with the University’s Research Publications and Open Access policy. |
| Keywords: | AI, AI safety, AI Safety Assurance, Safety Cases, LLMs, General Purpose AI |
| Dates: | 2026 |
| Institution: | The University of York |
| Academic Units: | The University of York > Faculty of Sciences (York) > Computer Science (York); The University of York > Faculty of Social Sciences (York) > The York Law School |
| Date Deposited: | 02 Mar 2026 12:00 |
| Last Modified: | 04 Mar 2026 00:13 |
| Status: | Published |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:238521 |
Download
Filename: IASEAI_Paper_Camera_Ready_AI_Safety_Cases_.pdf
Description: IASEAI_Paper_Camera_Ready_AI_Safety_Cases
Licence: CC-BY 2.5
