Ford, J., Pevy, N., Grunewald, R. et al. (2 more authors) (2025) Can artificial intelligence diagnose seizures based on patients' descriptions? A study of GPT-4. Epilepsia. ISSN 0013-9580
Abstract
Objective
Generalist large language models (LLMs) have shown diagnostic potential in various medical contexts but have not been explored extensively in relation to epilepsy. This paper aims to test the performance of an LLM (OpenAI's GPT-4) on the differential diagnosis of epileptic and functional/dissociative seizures (FDS) based on patients' descriptions.
Methods
GPT-4 was asked to diagnose 41 cases of epilepsy (n = 16) or FDS (n = 25) based on transcripts of patients describing their symptoms (median word count = 399). It was first asked to perform this task without additional training examples (zero-shot) before being asked to perform it having been given one, two, and three examples of each condition (one-, two-, and three-shot). As a benchmark, three experienced neurologists performed this task without access to any additional clinical or demographic information (e.g., age, gender, socioeconomic status).
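The paper's exact prompt wording is not reproduced here, but the zero- to three-shot setup can be sketched as assembling labelled example transcripts before the unlabelled case. A minimal sketch, assuming a chat-style message format; the system instruction and transcript snippets below are invented placeholders, not the study's materials:

```python
def build_messages(examples, new_transcript):
    """Assemble a k-shot chat prompt: labelled example transcripts first,
    then the unlabelled case to classify (zero-shot when examples is empty)."""
    messages = [{
        "role": "system",
        "content": ("Based only on the patient's description of their seizures, "
                    "answer 'epilepsy' or 'FDS'."),
    }]
    for transcript, label in examples:
        # Each worked example is a user transcript followed by the correct label.
        messages.append({"role": "user", "content": transcript})
        messages.append({"role": "assistant", "content": label})
    # Finally, the case the model must diagnose.
    messages.append({"role": "user", "content": new_transcript})
    return messages

# One-shot prompt built from an invented placeholder transcript:
msgs = build_messages(
    [("I get a strange rising feeling, then lose a minute of time.", "epilepsy")],
    "My whole body shakes and the attacks can last twenty minutes.",
)
```

In the k-shot conditions the `examples` list would hold one, two, or three transcript/label pairs per condition; in the zero-shot condition it is empty and only the system instruction and the new case are sent.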
Results
In the zero-shot condition, GPT-4's average balanced accuracy was 57% (κ = .15). Balanced accuracy improved in the one-shot condition (64%, κ = .27), but did not improve any further in the two-shot (62%, κ = .24) and three-shot (62%, κ = .23) conditions. Performance in all four conditions was worse than the mean balanced accuracy of the experienced neurologists (71%, κ = .42). However, in the subset of 18 cases that all three neurologists had “diagnosed” correctly (median word count = 684), GPT-4's balanced accuracy was 81% (κ = .66).
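The two reported metrics are straightforward to compute from a binary confusion matrix: balanced accuracy is the mean of sensitivity and specificity, and Cohen's κ measures agreement beyond chance. A minimal sketch; the counts below are illustrative only, not the study's confusion matrix:

```python
def balanced_accuracy(tp, fn, tn, fp):
    """Mean of sensitivity and specificity for a binary classifier."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return (sensitivity + specificity) / 2

def cohens_kappa(tp, fn, tn, fp):
    """Observed agreement corrected for the agreement expected by chance."""
    n = tp + fn + tn + fp
    observed = (tp + tn) / n
    # Chance agreement from the marginal frequencies of each label.
    expected = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / (n * n)
    return (observed - expected) / (1 - expected)

# Illustrative counts only (41 cases, as in the study, but invented splits):
tp, fn, tn, fp = 20, 5, 10, 6
print(balanced_accuracy(tp, fn, tn, fp))
print(cohens_kappa(tp, fn, tn, fp))
```

Balanced accuracy is the appropriate headline figure here because the two classes are imbalanced (16 epilepsy vs. 25 FDS cases), so raw accuracy would reward always guessing the majority class.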
Significance
Although its “raw” performance was poor, GPT-4 improved noticeably after being given just one example each of a patient describing epilepsy and FDS. Giving two or three examples did not improve performance further, but the finding that GPT-4 did much better in the cases correctly diagnosed by all three neurologists suggests that providing more extensive clinical data and more elaborate approaches (e.g., more refined prompt engineering, fine-tuning, or retrieval-augmented generation) could unlock the full diagnostic potential of LLMs.
Metadata

| Field | Value |
|---|---|
| Item Type | Article |
| Authors/Creators | Ford, J., Pevy, N., Grunewald, R. et al. (2 more authors) |
| Copyright, Publisher and Additional Information | © 2025 The Author(s). Epilepsia published by Wiley Periodicals LLC on behalf of the International League Against Epilepsy. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. http://creativecommons.org/licenses/by/4.0/ |
| Keywords | artificial intelligence; automated diagnosis; large language model; epilepsy; functional/dissociative seizures |
| Institution | The University of Sheffield |
| Academic Units | The University of Sheffield > Faculty of Medicine, Dentistry and Health (Sheffield) > School of Medicine and Population Health |
| Depositing User | Symplectic Sheffield |
| Date Deposited | 03 Mar 2025 16:30 |
| Last Modified | 03 Mar 2025 16:36 |
| Status | Published online |
| Publisher | Wiley |
| Refereed | Yes |
| Identification Number (DOI) | 10.1111/epi.18322 |
| Open Archives Initiative ID (OAI ID) | oai:eprints.whiterose.ac.uk:223990 |