Ma, Y., Yuan, R., Li, Y. et al. (12 more authors) (2023) On the effectiveness of speech self-supervised learning for music. In: Sarti, A., Antonacci, F., Sandler, M., Bestagini, P., Dixon, S., Liang, B., Richard, G. and Pauwels, J., (eds.) ISMIR 2023: 24th International Society for Music Information Retrieval Conference proceedings. 24th International Society for Music Information Retrieval Conference (ISMIR 2023), 05-09 Nov 2023, Milan, Italy. International Society for Music Information Retrieval (ISMIR) , pp. 457-465. ISBN 978-1-7327299-3-3
Abstract
Self-supervised learning (SSL) has shown promising results in various speech and natural language processing applications. However, its efficacy in music information retrieval (MIR) remains largely unexplored. While previous SSL models pre-trained on music recordings may have been mostly closed-source, recent speech models such as wav2vec2.0 have shown promise. Nevertheless, research on applying speech SSL models to music recordings has been limited. We explore the music adaptation of SSL with two distinctive speech-related models, data2vec1.0 and HuBERT, referring to them as music2vec and musicHuBERT, respectively. We train 12 SSL models with 95M parameters under various pre-training configurations and systematically evaluate them on 13 different MIR tasks. Our findings suggest that training with music data generally improves performance on MIR tasks, even when models are trained using paradigms designed for speech. However, we identify the limitations of these speech-oriented designs, especially in modelling polyphonic information. Based on the experimental results, we also give empirical suggestions for designing future musical SSL strategies and paradigms.
Metadata
| Field | Value |
| --- | --- |
| Item Type | Proceedings Paper |
| Copyright, Publisher and Additional Information | © Y. Ma, R. Yuan, Y. Li, G. Zhang et al. Licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). Attribution: Y. Ma, R. Yuan, Y. Li, G. Zhang et al., “On the Effectiveness of Speech Self-Supervised Learning for Music”, in Proc. of the 24th Int. Society for Music Information Retrieval Conf., Milan, Italy, 2023. |
| Institution | The University of Sheffield |
| Academic Units | The University of Sheffield > Faculty of Engineering (Sheffield) > Department of Computer Science (Sheffield) |
| Depositing User | Symplectic Sheffield |
| Date Deposited | 07 Jun 2024 13:47 |
| Last Modified | 07 Jun 2024 13:47 |
| Published Version | https://ismir2023program.ismir.net/poster_154.html |
| Status | Published |
| Publisher | International Society for Music Information Retrieval (ISMIR) |
| Refereed | Yes |
| Open Archives Initiative ID (OAI ID) | oai:eprints.whiterose.ac.uk:213201 |
Available Versions of this Item
- On the effectiveness of speech self-supervised learning for music. (deposited 06 Jun 2024 16:25)
  - On the effectiveness of speech self-supervised learning for music. (deposited 07 Jun 2024 13:47) [Currently Displayed]