Clarke, J. orcid.org/0000-0002-1032-6472, Gotoh, Y. and Goetze, S. (2025) Ensembling synchronisation-based and face–voice association paradigms for robust active speaker detection in egocentric recordings. In: Karpov, A. and Gosztolya, G., (eds.) Speech and Computer: 27th International Conference, SPECOM 2025, Szeged, Hungary, October 13-15, 2025, Proceedings, Part II. 27th International Conference, SPECOM 2025, 13-15 Oct 2025, Szeged, Hungary. Lecture Notes in Computer Science, LNAI 16188. Springer Cham, pp. 289-301. ISBN: 9783032079589. ISSN: 0302-9743. EISSN: 1611-3349.
Abstract
Audiovisual active speaker detection (ASD) in egocentric recordings is challenged by frequent occlusions, motion blur, and audio interference, which undermine the discernibility of temporal synchrony between lip movement and speech. Traditional synchronisation-based systems perform well under clean conditions but degrade sharply in first-person recordings. Conversely, face–voice association (FVA)-based methods forgo synchronisation modelling in favour of cross-modal biometric matching, exhibiting robustness to transient visual corruption but suffering when overlapping speech or front-end segmentation errors occur. In this paper, a simple yet effective ensemble approach is proposed to fuse synchronisation-dependent and synchronisation-agnostic model outputs via weighted averaging, thereby harnessing complementary cues without introducing complex fusion architectures. A refined preprocessing pipeline for the FVA-based component is also introduced to optimise ensemble integration. Experiments on the Ego4D-AVD validation set demonstrate that the ensemble attains 70.2% and 66.7% mean Average Precision (mAP) with TalkNet and Light-ASD backbones, respectively. A qualitative analysis stratified by face image quality and utterance masking prevalence further substantiates the complementary strengths of each component.
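To make the fusion step concrete, the sketch below illustrates weighted averaging of per-frame scores from a synchronisation-dependent backbone (e.g. TalkNet or Light-ASD) and a synchronisation-agnostic FVA component. It is a minimal sketch, not the paper's implementation: the function name `fuse_scores`, the weight `alpha`, and the example score values are hypothetical, and both score streams are assumed to be aligned to the same face track and scaled to [0, 1].

```python
# Minimal sketch of weighted-average score fusion for active speaker
# detection. All names and values are illustrative assumptions; the
# paper's actual weights and preprocessing are not reproduced here.
import numpy as np

def fuse_scores(sync_scores: np.ndarray,
                fva_scores: np.ndarray,
                alpha: float = 0.5) -> np.ndarray:
    """Fuse per-frame scores from a synchronisation-based model
    (e.g. TalkNet or Light-ASD) and a face-voice association model
    by weighted averaging. Inputs are assumed frame-aligned and in [0, 1]."""
    if sync_scores.shape != fva_scores.shape:
        raise ValueError("score streams must be aligned to the same frames")
    return alpha * sync_scores + (1.0 - alpha) * fva_scores

# Example: three frames of one face track (hypothetical scores).
sync = np.array([0.9, 0.2, 0.7])    # synchronisation-dependent scores
fva = np.array([0.8, 0.6, 0.75])    # synchronisation-agnostic FVA scores
print(fuse_scores(sync, fva, alpha=0.6))
```

Under this reading, `alpha` trades off the two cues: a higher weight favours the synchronisation model when lip-speech synchrony is reliable, while a lower weight leans on the FVA scores when visual evidence is degraded.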
Metadata
| Item Type: | Proceedings Paper |
|---|---|
| Authors/Creators: | Clarke, J.; Gotoh, Y.; Goetze, S. |
| Editors: | Karpov, A.; Gosztolya, G. |
| Copyright, Publisher and Additional Information: | © 2025 The Author(s). Except as otherwise noted, this author-accepted version of a paper published in Speech and Computer: 27th International Conference, SPECOM 2025, Szeged, Hungary, October 13-15, 2025, Proceedings, Part II is made available via the University of Sheffield Research Publications and Copyright Policy under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution and reproduction in any medium, provided the original work is properly cited. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ |
| Keywords: | Face–voice association; audiovisual active speaker detection; egocentric recordings |
| Dates: | Published: 2025 |
| Institution: | The University of Sheffield |
| Academic Units: | The University of Sheffield > Faculty of Engineering (Sheffield) > Department of Computer Science (Sheffield) |
| Funding Information: | META PLATFORM INC (grant number: UNSPECIFIED); Engineering and Physical Sciences Research Council (grant number: 2588133); Engineering and Physical Sciences Research Council (grant number: 2638501) |
| Date Deposited: | 15 Aug 2025 07:46 |
| Last Modified: | 22 Oct 2025 15:19 |
| Status: | Published |
| Publisher: | Springer Cham |
| Series Name: | Lecture Notes in Computer Science |
| Refereed: | Yes |
| Identification Number: | 10.1007/978-3-032-07959-6_21 |
| Related URLs: | |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:230329 |
