Bannister, S. orcid.org/0000-0003-4905-0511, Firth, J. orcid.org/0000-0002-7825-0945, Roa-Dabike, G. orcid.org/0000-0001-7839-8061 et al. (8 more authors) (2026) The first cadenza challenge: perceptual evaluation of machine learning systems to improve audio quality of popular music for those with hearing loss. Trends in Hearing, 30. ISSN: 2331-2165
Abstract
Music is central to many people's lives, and hearing loss (HL) is often a barrier to musical engagement. Hearing aids (HAs) help, but their efficacy in improving speech does not consistently translate to music. This research evaluated systems submitted to the 1st Cadenza Machine Learning Challenge, where entrants aimed to improve music audio quality for HA users through source separation and remixing. The HA users (N = 53, ranging from “mild” to “moderately severe” HL) assessed eight challenge systems (including a baseline that separated sources with the HDemucs algorithm, remixed the stems to the original mix, and applied National Acoustic Laboratories-Revised (NAL-R) amplification) and rated 200 music samples processed for their HL. Participants rated samples on basic audio quality, clarity, harshness, distortion, frequency balance, and liking. Results suggest no entrant system surpassed the baseline for audio quality, although differences emerged in system efficacy across HL severities. Clarity and distortion ratings were most predictive of audio quality. Finally, some systems produced signals with higher objective loudness, spectral flux, and clipping with increasing HL severity; these received lower audio quality ratings from listeners with moderately severe HL. Findings highlight that music enhancement requires varied solutions, tested across a range of HL severities. This challenge provided a first application of source separation to music listening with HL. However, state-of-the-art source separation algorithms limited the diversity of entrant solutions, resulting in no improvements over the baseline; to promote the development of innovative processing strategies, future work should increase the complexity of the music listening scenarios to be addressed through source separation.
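The abstract refers to objective signal measures (loudness, spectral flux, clipping) and to a baseline that applies NAL-R amplification. The Python sketch below illustrates, under stated assumptions, how spectral flux, a clipping measure, and NAL-R insertion gains can be computed; the frame and hop sizes, the clipping threshold, and the example audiogram are illustrative choices, not the challenge's actual implementation.

```python
# Illustrative sketch only -- not the Cadenza Challenge's exact code.
# Shows: spectral flux, a simple clipping measure, and NAL-R insertion
# gains (Byrne & Dillon, 1986). Frame/hop sizes, the clipping threshold,
# and the example audiogram are assumptions for illustration.
import numpy as np

def spectral_flux(x: np.ndarray, frame: int = 2048, hop: int = 512) -> float:
    """Mean positive change in magnitude spectrum between successive frames."""
    window = np.hanning(frame)
    frames = [x[i:i + frame] * window for i in range(0, len(x) - frame, hop)]
    mags = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    diff = np.diff(mags, axis=0)
    return float(np.mean(np.maximum(diff, 0.0).sum(axis=1)))

def clipped_fraction(x: np.ndarray, threshold: float = 0.999) -> float:
    """Proportion of samples at or beyond full scale, for x in [-1, 1]."""
    return float(np.mean(np.abs(x) >= threshold))

def nal_r_gains(thresholds_db: dict[int, float]) -> dict[int, float]:
    """NAL-R insertion gain per audiometric frequency:
    IG(f) = 0.05 * (HL500 + HL1000 + HL2000) + 0.31 * HL(f) + k(f),
    using the commonly cited k(f) constants below."""
    k = {250: -17, 500: -8, 1000: 1, 1500: 1, 2000: -1,
         3000: -2, 4000: -2, 6000: -2}
    x = 0.05 * (thresholds_db[500] + thresholds_db[1000] + thresholds_db[2000])
    return {f: x + 0.31 * thresholds_db[f] + k[f]
            for f in thresholds_db if f in k}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.clip(rng.normal(0, 0.4, 44100), -1.0, 1.0)  # 1 s of noise, 44.1 kHz
    print(f"flux={spectral_flux(x):.1f}  clipped={clipped_fraction(x):.4f}")
    # Hypothetical audiogram for a moderate loss (dB HL per frequency):
    print(nal_r_gains({250: 30, 500: 40, 1000: 45, 2000: 50, 4000: 60}))
```

Higher spectral flux and clipped fraction on the processed signal are consistent with the harsher, more distorted renderings the abstract links to lower quality ratings at greater HL severities.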
Metadata
| Item Type: | Article |
|---|---|
| Authors/Creators: | Bannister, S.; Firth, J.; Roa-Dabike, G. et al. (8 more authors) |
| Copyright, Publisher and Additional Information: | © The Author(s) 2026. This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage). |
| Keywords: | music; hearing loss; hearing aids; machine learning; signal processing; audio quality; source separation |
| Dates: | Published: 2026 |
| Institution: | The University of Sheffield |
| Academic Units: | The University of Sheffield > Faculty of Engineering (Sheffield) > Department of Computer Science (Sheffield) |
| Date Deposited: | 03 Feb 2026 15:41 |
| Last Modified: | 03 Feb 2026 15:41 |
| Status: | Published |
| Publisher: | SAGE Publications |
| Refereed: | Yes |
| Identification Number: | 10.1177/23312165251408761 |
| Related URLs: | |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:237414 |
