Arana-Catania, Miguel, Sonee, Amir, Khan, Abdul Manan et al. (11 more authors) (2025) Explainable Reinforcement and Causal Learning for Improving Trust to 6G Stakeholders. IEEE Open Journal of the Communications Society. pp. 4101-4125. ISSN 2644-125X
Abstract
Future telecommunications will increasingly integrate AI capabilities into network infrastructures to deliver seamless and harmonized services closer to end-users. However, this progress also raises significant trust and safety concerns. The machine learning systems orchestrating these advanced services will widely rely on deep reinforcement learning (DRL) to process multi-modal requirements datasets and make semantically modulated decisions, introducing three major challenges: (1) First, we acknowledge that most explainable AI research is stakeholder-agnostic while, in reality, the explanations must cater for diverse telecommunications stakeholders, including network service providers, legal authorities, and end users, each with unique goals and operational practices; (2) Second, DRL lacks prior models or established frameworks to guide the creation of meaningful long-term explanations of the agent's behaviour in a goal-oriented RL task, and we introduce state-of-the-art approaches such as reward machine and sub-goal automata that can be universally represented and easily manipulated by logic programs and verifiably learned by inductive logic programming of answer set programs; (3) Third, most explainability approaches focus on correlation rather than causation, and we emphasise that understanding causal learning can further enhance 6G network optimisation. Together, in our judgement they form crucial enabling technologies for trustworthy services in 6G. This review offers a timely resource for academic researchers and industry practitioners by highlighting the methodological advancements needed for explainable DRL (X-DRL) in 6G. It identifies key stakeholder groups, maps their needs to X-DRL solutions, and presents case studies showcasing practical applications. 
By identifying and analysing these challenges in the context of 6G case studies, this work aims to inform future research, transform industry practices, and highlight unresolved gaps in this rapidly evolving field.
Metadata
| Item Type: | Article |
|---|---|
| Authors/Creators: | |
| Copyright, Publisher and Additional Information: | Publisher Copyright: © 2020 IEEE. |
| Keywords: | 6G, causal learning, explainable AI, reinforcement learning, stakeholders, trust |
| Dates: | |
| Institution: | The University of York |
| Academic Units: | The University of York > Faculty of Sciences (York) > Computer Science (York); The University of York > Faculty of Sciences (York) > Electronic Engineering (York) |
| Depositing User: | Pure (York) |
| Date Deposited: | 13 Jun 2025 08:30 |
| Last Modified: | 13 Jun 2025 08:30 |
| Published Version: | https://doi.org/10.1109/OJCOMS.2025.3563415 |
| Status: | Published |
| Refereed: | Yes |
| Identification Number: | 10.1109/OJCOMS.2025.3563415 |
| Related URLs: | |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:227805 |
Download
Filename: Explainable_Reinforcement_and_Causal_Learning_for_Improving_Trust_to_6G_Stakeholders.pdf
Description: Explainable Reinforcement and Causal Learning for Improving Trust to 6G Stakeholders
Licence: CC-BY 2.5