A generic framework for editing and synthesizing multimodal data with relative emotion strength

Chan, JCP, Shum, HPH, Wang, H orcid.org/0000-0002-2281-5679 et al. (3 more authors) (2019) A generic framework for editing and synthesizing multimodal data with relative emotion strength. Computer Animation and Virtual Worlds, 30 (6). e1871. ISSN 1546-4261

Metadata

Copyright, Publisher and Additional Information: © 2019 John Wiley & Sons, Ltd. This is the peer reviewed version of the following article: Chan, JCP, Shum, HPH, Wang, H et al. (3 more authors) (2019) A generic framework for editing and synthesizing multimodal data with relative emotion strength. Computer Animation and Virtual Worlds. e1871. ISSN 1546-4261, which has been published in final form at https://doi.org/10.1002/cav.1871. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Use of Self-Archived Versions.
Keywords: data-driven; emotion motion; facial expression; image editing; motion capture; motion synthesis; relative attribute
Dates:
  • Accepted: 10 January 2019
  • Published (online): 4 February 2019
  • Published: November 2019
Institution: The University of Leeds
Academic Units: The University of Leeds > Faculty of Engineering & Physical Sciences (Leeds) > School of Computing (Leeds)
Depositing User: Symplectic Publications
Date Deposited: 25 Mar 2019 10:31
Last Modified: 04 Feb 2020 01:38
Status: Published
Publisher: Wiley
Identification Number: https://doi.org/10.1002/cav.1871
