Direct Speech Reconstruction From Articulatory Sensor Data by Machine Learning

Gonzalez, J.A. orcid.org/0000-0002-5531-8994, Cheah, L.A., Gomez, A.M. et al. (5 more authors) (2017) Direct Speech Reconstruction From Articulatory Sensor Data by Machine Learning. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 25 (12). pp. 2362-2374. ISSN 2329-9290

Metadata
Copyright, Publisher and Additional Information: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. Reproduced in accordance with the publisher's self-archiving policy.
Keywords: Silent speech interfaces; articulatory-to-acoustic mapping; speech rehabilitation; permanent magnet articulography; speech synthesis
Dates:
  • Accepted: 25 September 2017
  • Published: 23 November 2017
Institution: The University of Sheffield
Academic Units: The University of Sheffield > Faculty of Engineering (Sheffield) > Department of Computer Science (Sheffield)
Funding Information:
  • University of Hull: CN/64(116)
  • National Institute for Health Research: II-AR-0410-12027
  • White Rose University Consortium: (no grant number)
Depositing User: Symplectic Sheffield
Date Deposited: 19 Mar 2018 14:57
Last Modified: 28 Jun 2018 14:55
Published Version: https://doi.org/10.1109/TASLP.2017.2757263
Status: Published
Publisher: IEEE
Refereed: Yes
Identification Number: https://doi.org/10.1109/TASLP.2017.2757263