Townsend, Bev orcid.org/0000-0002-8486-6041, Hodge, Victoria J. orcid.org/0000-0002-2469-0224, Richardson, Hannah et al. (2 more authors) (2025) Cautious Optimism: Public Voices on Medical AI and Sociotechnical Harm. Frontiers in Digital Health. 1625747. ISSN: 2673-253X
Abstract
Background: Medical-purpose software and Artificial Intelligence ('AI')-enabled technologies ('medical AI') raise important social, ethical, cultural, and regulatory challenges. To elucidate these challenges, we present the findings of a qualitative study undertaken to elicit public perspectives and expectations around medical AI adoption and related sociotechnical harm. Sociotechnical harm refers to any adverse implication, including, but not limited to, physical, psychological, social, and cultural impacts, experienced by a person or broader society as a result of medical AI adoption. The work is intended to guide effective policy interventions to address, prioritise, and mitigate such harm.

Methods: Using a qualitative design, twenty interviews and/or long-form questionnaires were completed between September and November 2024 with UK participants to explore their perspectives, expectations, and concerns around medical AI adoption and related sociotechnical harm. An emphasis was placed on diversity and inclusion, with study participants drawn from racially, ethnically, and linguistically diverse groups and from self-identified minority groups. A thematic analysis of interview transcripts and questionnaire responses was conducted to identify general perceptions of medical AI and of sociotechnical harm.

Results: Our findings demonstrate that while participants are cautiously optimistic about medical AI adoption, all participants expressed concern about matters related to sociotechnical harm. These included potential harm to human autonomy; alienation and a reduction in standards of care; a lack of value alignment and integration; epistemic injustice; bias and discrimination; and issues around access and equity, explainability and transparency, and data privacy and data-related harm. While responsibility was seen to be shared, participants located responsibility for addressing sociotechnical harm primarily with the regulatory authorities. A particular concern was the risk of exclusion and inequitable access arising from practical barriers such as physical limitations, technical competency, language barriers, or financial constraints.

Conclusion: We conclude that medical AI adoption can be better supported by identifying, prioritising, and addressing sociotechnical harm, including through the development of clear impact-assessment and mitigation practices, the embedding of pro-social values within the system, and effective policy guidance and intervention.
Metadata
| Item Type: | Article |
| --- | --- |
| Authors/Creators: | Townsend, Bev (ORCID: 0000-0002-8486-6041); Hodge, Victoria J. (ORCID: 0000-0002-2469-0224); Richardson, Hannah; et al. (2 more authors) |
| Keywords: | Public perspectives, medical devices regulation, AI-enabled medical devices, socio-ethical and cultural requirement, sociotechnical harm, healthcare, medical AI |
| Dates: | Published: 2025 |
| Institution: | The University of York |
| Academic Units: | The University of York > Faculty of Social Sciences (York) > The York Law School; The University of York > Faculty of Sciences (York) > Computer Science (York) |
| Depositing User: | Pure (York) |
| Date Deposited: | 24 Sep 2025 10:50 |
| Last Modified: | 24 Sep 2025 10:50 |
| Published Version: | https://doi.org/10.3389/fdgth.2025.1625747 |
| Status: | Published |
| Refereed: | Yes |
| Identification Number: | 10.3389/fdgth.2025.1625747 |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:232161 |