Buehler, P., Everingham, M., Huttenlocher, D.P. and Zisserman, A. (2008) Long term arm and hand tracking for continuous sign language TV broadcasts. In: Everingham, M., Needham, C.J. and Fraile, R. (eds.) Proceedings of the 19th British Machine Vision Conference (BMVC 2008), 1st-4th September 2008, University of Leeds, UK. BMVA Press, pp. 1105-1114. ISBN 978-1-901725-36-0.
The goal of this work is to detect hand and arm positions over continuous sign language video sequences of more than one hour in length. We cast the problem as inference in a generative model of the image. Under this model, limb detection is expensive due to the very large number of possible configurations each part can assume. We make the following contributions to reduce this cost: (i) using efficient sampling from a pictorial structure proposal distribution to obtain reasonable configurations; (ii) identifying a large set of frames where correct configurations can be inferred, and using temporal tracking elsewhere. Results are reported for signing footage with changing background, challenging image conditions, and different signers; and we show that the method is able to identify the true arm and hand locations. The results exceed the state-of-the-art for the length and stability of continuous limb tracking.
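The abstract's first contribution, efficient sampling from a tree-structured pictorial structure proposal distribution, can be loosely illustrated with a toy ancestral-sampling sketch. Everything below (the part names, the number of discrete states, the unary and pairwise scores) is invented for illustration; the paper's actual model scores image likelihoods over a far larger, continuous configuration space. The key idea shown is that a tree (here, a chain) of parts lets one draw a full limb configuration in O(parts × states) by sampling the root and then each child conditioned on its parent, rather than enumerating all joint configurations.

```python
import random

# Hypothetical chain of arm parts (parent -> child); not the paper's actual parts.
PARTS = ["shoulder", "upper_arm", "lower_arm", "hand"]
STATES = 3  # toy number of candidate positions per part

# Invented scores: unary[part][s] stands in for image evidence at state s;
# pairwise[(parent, child)][sp][sc] stands in for kinematic compatibility.
unary = {p: [1.0, 2.0, 0.5] for p in PARTS}
pairwise = {(PARTS[i], PARTS[i + 1]): [[2.0 if a == b else 0.3 for b in range(STATES)]
                                       for a in range(STATES)]
            for i in range(len(PARTS) - 1)}

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

def sample_index(probs, rng):
    """Draw an index according to the given probabilities."""
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

def sample_configuration(rng):
    """Ancestral sampling: root from its unary scores, each child
    conditioned on its sampled parent via the pairwise term."""
    config = {PARTS[0]: sample_index(normalize(unary[PARTS[0]]), rng)}
    for parent, child in zip(PARTS, PARTS[1:]):
        weights = [unary[child][s] * pairwise[(parent, child)][config[parent]][s]
                   for s in range(STATES)]
        config[child] = sample_index(normalize(weights), rng)
    return config

rng = random.Random(0)
samples = [sample_configuration(rng) for _ in range(5)]
```

In the paper's setting, many such samples serve as proposals for the more expensive generative image model to evaluate, which is what makes limb detection tractable.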
Item Type: Proceedings Paper
Additional Information: Additional data accompanying this paper is available from http://www.robots.ox.ac.uk/~vgg/research/sign_language/index.html
Institution: The University of Leeds
Academic Units: The University of Leeds > Faculty of Engineering (Leeds) > School of Computing (Leeds)
Depositing User: Mrs Irene Rudling
Date Deposited: 20 Feb 2009 17:15
Last Modified: 08 Feb 2013 17:06