Ye, Fei and Bors, Adrian Gheorghe orcid.org/0000-0001-7838-0021 (2021) Learning joint latent representations based on information maximization. Information Sciences. pp. 216-236. ISSN 0020-0255
Abstract
Learning disentangled and interpretable representations is an important aspect of information understanding. In this paper, we propose a novel deep learning model representing both discrete and continuous latent variable spaces, which can be used in either supervised or unsupervised learning. The proposed model is trained using an optimization function employing the mutual information maximization criterion. For the unsupervised learning setting, we define a lower bound to the mutual information between the joint distribution of the latent variables corresponding to the real data and those generated by the model. Maximizing this lower bound during training induces the learning of disentangled and interpretable data representations. Such representations can be used for attribute manipulation and image editing tasks.
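As a rough illustration of the mutual-information-maximization criterion mentioned in the abstract: a commonly used variational lower bound (in the style of InfoGAN) replaces the intractable posterior over a latent code c with an auxiliary network Q, giving L_I = E[log Q(c|x)] + H(c) ≤ I(c; x). The sketch below is a generic numpy evaluation of that bound for a discrete code, not the paper's exact objective; the function name, inputs, and eps constant are illustrative assumptions.

```python
import numpy as np

def mi_lower_bound(codes, q_posterior, prior):
    """Variational lower bound on I(c; x), InfoGAN-style (illustrative sketch):
        L_I = E_{c ~ P(c), x ~ G(z, c)}[log Q(c | x)] + H(c)  <=  I(c; x)

    codes:       (N,) indices of the discrete codes used to generate each sample x_i
    q_posterior: (N, K) rows give the auxiliary network's output Q(. | x_i)
    prior:       (K,) prior distribution P(c) over the discrete code
    """
    eps = 1e-12  # numerical floor to avoid log(0)
    # E[log Q(c|x)]: pick, for each sample, the probability Q assigned to its true code
    log_q = np.log(q_posterior[np.arange(len(codes)), codes] + eps)
    # H(c): entropy of the prior over codes (a constant w.r.t. the model)
    entropy = -np.sum(prior * np.log(prior + eps))
    return log_q.mean() + entropy
```

When Q recovers the code perfectly the bound reaches its maximum H(c) = log K; when Q is uninformative (equal to the uniform prior) the bound collapses to 0, matching the intuition that the generated samples then carry no information about the code.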
Metadata
| Item Type: | Article |
|---|---|
| Authors/Creators: | Ye, Fei; Bors, Adrian Gheorghe |
| Copyright, Publisher and Additional Information: | © 2021 Elsevier Inc. All rights reserved. This is an author-produced version of the published paper. Uploaded in accordance with the publisher’s self-archiving policy. |
| Institution: | The University of York |
| Academic Units: | The University of York > Faculty of Sciences (York) > Computer Science (York) |
| Depositing User: | Pure (York) |
| Date Deposited: | 20 Apr 2021 14:30 |
| Last Modified: | 18 Dec 2024 00:19 |
| Published Version: | https://doi.org/10.1016/j.ins.2021.03.007 |
| Status: | Published |
| Refereed: | Yes |
| Identification Number: | 10.1016/j.ins.2021.03.007 |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:173260 |