Ye, Fei and Bors, Adrian Gheorghe orcid.org/0000-0001-7838-0021 (2020) Mixtures of Variational Autoencoders. In: Proc. Int. Conf. on Image Processing, Theory, Tools and Applications (IPTA). IEEE, Paris, France
Abstract
In this paper, we develop a new deep mixture learning framework aimed at learning underlying complex data structures. Each component in the mixture model is implemented as a Variational Autoencoder (VAE), a well-known deep learning model that represents data in a latent space lying on a variational manifold. The mixing parameters are estimated from a Dirichlet distribution modelled by each encoder. In order to train this mixture model, named M-VAE, we derive a mixture evidence lower bound on the sample log-likelihood, which is optimized in order to jointly estimate all mixture components. We further propose to use the d-variable Hilbert-Schmidt Independence Criterion (dHSIC) as a regularization criterion in order to enforce independence among the encoders' distributions. This criterion encourages the mixture components to learn different data distributions and represent them in the latent space. In our experiments, we observe that the proposed M-VAE model can discover disentangled data representations which cannot be achieved with a single VAE.
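The dHSIC regularizer mentioned in the abstract measures joint dependence among d random variables via kernel Gram matrices; the biased V-statistic estimator can be sketched in a few lines of NumPy. This is a generic illustration of the dHSIC statistic under an assumed RBF kernel, not the paper's actual implementation; the function names and the bandwidth choice are illustrative.

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    """RBF (Gaussian) kernel Gram matrix for samples x of shape (n, dim)."""
    sq = np.sum(x ** 2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2.0 * x @ x.T          # pairwise squared distances
    return np.exp(-d2 / (2.0 * sigma ** 2))

def dhsic(samples, sigma=1.0):
    """Biased empirical dHSIC statistic for d variables.

    samples: list of d arrays, each of shape (n, dim_k), row i of every
    array corresponding to the same joint observation. Returns ~0 when
    the variables are independent, larger values when they are dependent.
    """
    grams = np.stack([rbf_gram(x, sigma) for x in samples])   # (d, n, n)
    term1 = np.mean(np.prod(grams, axis=0))                   # mean of entrywise products
    term2 = np.prod([K.mean() for K in grams])                # product of Gram means
    term3 = np.mean(np.prod(grams.mean(axis=2), axis=0))      # product of row means
    return term1 + term2 - 2.0 * term3

# Dependent variables score higher than independent ones:
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 1))
y = rng.normal(size=(100, 1))
print(dhsic([x, x]) > dhsic([x, y]))   # dependence detected
```

Used as a regularization term, minimizing this statistic over the encoders' outputs pushes the component distributions toward mutual independence, which is what encourages the mixture components to specialize on different parts of the data.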
Metadata
| Item Type: | Proceedings Paper |
| --- | --- |
| Authors/Creators: | Ye, Fei; Bors, Adrian Gheorghe |
| Dates: | 2020 |
| Institution: | The University of York |
| Academic Units: | The University of York > Faculty of Sciences (York) > Computer Science (York) |
| Depositing User: | Pure (York) |
| Date Deposited: | 30 Nov 2020 12:30 |
| Last Modified: | 18 Dec 2024 00:39 |
| Published Version: | https://doi.org/10.1109/IPTA50016.2020.9286619 |
| Status: | Published |
| Publisher: | IEEE |
| Identification Number: | 10.1109/IPTA50016.2020.9286619 |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:168548 |
Download
Filename: IPTA2020a.pdf
Description: Mixtures of Variational Autoencoders