Ellis, Zachary, Joselowitz, Jared, Deo, Yash et al. (7 more authors) (2025) WER is Unaware: Assessing How ASR Errors Distort Clinical Understanding in Patient Facing Dialogue. [Preprint]
Abstract
As Automatic Speech Recognition (ASR) is increasingly deployed in clinical dialogue, standard evaluations still rely heavily on Word Error Rate (WER). This paper challenges that standard, investigating whether WER or other common metrics correlate with the clinical impact of transcription errors. We establish a gold-standard benchmark by having expert clinicians compare ground-truth utterances to their ASR-generated counterparts, labeling the clinical impact of any discrepancies found in two distinct doctor-patient dialogue datasets. Our analysis reveals that WER and a comprehensive suite of existing metrics correlate poorly with the clinician-assigned risk labels (No, Minimal, or Significant Impact). To bridge this evaluation gap, we introduce an LLM-as-a-Judge, programmatically optimized using GEPA through DSPy to replicate expert clinical assessment. The optimized judge (Gemini-2.5-Pro) achieves human-comparable performance, obtaining 90% accuracy and a strong Cohen's κ of 0.816. This work provides a validated, automated framework for moving ASR evaluation beyond simple textual fidelity to a necessary, scalable assessment of safety in clinical dialogue.
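The abstract contrasts two kinds of measurement: textual fidelity (WER) and agreement with clinician-assigned impact labels (Cohen's κ). A minimal sketch of both, not the paper's implementation, is shown below; it assumes the `jiwer` and `scikit-learn` packages, and all transcripts and labels are invented purely for illustration.

```python
# Minimal sketch (not the authors' code) of the two measurements the
# abstract contrasts: textual WER versus label agreement via Cohen's kappa.
# Requires: pip install jiwer scikit-learn
import jiwer
from sklearn.metrics import cohen_kappa_score

# Hypothetical ground-truth and ASR transcripts of one utterance.
reference = "the patient takes fifteen milligrams of methotrexate weekly"
hypothesis = "the patient takes fifty milligrams of methotrexate weekly"

# WER sees a single substituted word: a tiny textual error (1 of 8 words,
# WER = 0.125)...
print(f"WER: {jiwer.wer(reference, hypothesis):.3f}")

# ...yet clinically the substitution is a dosing error, the kind the paper's
# clinicians would label "Significant Impact". Agreement between an automated
# judge and clinicians is then scored with Cohen's kappa over such labels.
clinician_labels = ["significant", "none", "minimal", "none", "significant"]
judge_labels     = ["significant", "none", "minimal", "minimal", "significant"]
print(f"Cohen's kappa: {cohen_kappa_score(clinician_labels, judge_labels):.3f}")
```

The illustrative lists above agree on 4 of 5 items, giving κ ≈ 0.706; the paper reports κ = 0.816 for its optimized judge against expert clinicians.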
Metadata
| Item Type: | Preprint |
|---|---|
| Authors/Creators: | Ellis, Zachary; Joselowitz, Jared; Deo, Yash; et al. (7 more authors) |
| Keywords: | cs.CL, cs.AI |
| Dates: | 2025 |
| Institution: | The University of York |
| Academic Units: | The University of York > Faculty of Sciences (York) > Computer Science (York) |
| Date Deposited: | 26 Nov 2025 10:40 |
| Last Modified: | 26 Nov 2025 10:40 |
| Published Version: | https://doi.org/10.48550/arXiv.2511.16544 |
| Status: | Published |
| Publisher: | arXiv |
| Identification Number: | 10.48550/arXiv.2511.16544 |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:234871 |
