Direct Speech Reconstruction From Articulatory Sensor Data by Machine Learning

Gonzalez, J.A. orcid.org/0000-0002-5531-8994, Cheah, L.A., Gomez, A.M. et al. (5 more authors) (2017) Direct Speech Reconstruction From Articulatory Sensor Data by Machine Learning. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 25 (12). pp. 2362-2374. ISSN 2329-9290

Metadata

Item Type: Article
Authors/Creators: Gonzalez, J.A., Cheah, L.A., Gomez, A.M. et al. (5 more authors)
Copyright, Publisher and Additional Information:

© 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Reproduced in accordance with the publisher's self-archiving policy.

Keywords: Silent speech interfaces; articulatory-to-acoustic mapping; speech rehabilitation; permanent magnet articulography; speech synthesis
Dates:
  • Published: 23 November 2017
  • Accepted: 25 September 2017
Institution: The University of Sheffield
Academic Units: The University of Sheffield > Faculty of Engineering (Sheffield) > Department of Computer Science (Sheffield)
Funding Information:
  • University of Hull: CN/64(116)
  • National Institute for Health Research: II-AR-0410-12027
  • White Rose University Consortium: none
Depositing User: Symplectic Sheffield
Date Deposited: 19 Mar 2018 14:57
Last Modified: 28 Jun 2018 14:55
Published Version: https://doi.org/10.1109/TASLP.2017.2757263
Status: Published
Publisher: IEEE
Refereed: Yes
Identification Number: 10.1109/TASLP.2017.2757263
