
Mixed-curvature Variational Autoencoders

Feb. 13, 2020

Machine Learning

Euclidean geometry has historically been the typical "workhorse" for machine learning applications due to its power and simplicity. However, it has recently been shown that geometric spaces with constant non-zero curvature improve representations and performance on a variety of data types and downstream tasks. Consequently, generative models like Variational Autoencoders (VAEs) have been successfully generalized to elliptical and hyperbolic latent spaces. While these approaches work well on data with particular kinds of biases (e.g. tree-like data for a hyperbolic VAE), there exists no generic approach unifying and leveraging all three models. We develop a Mixed-curvature Variational Autoencoder, an efficient way to train a VAE whose latent space is a product of constant curvature Riemannian manifolds, where the per-component curvature is fixed or learnable. This generalizes the Euclidean VAE to curved latent spaces and recovers it when the curvatures of all latent space components go to 0.
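The core geometric idea — a latent space built as a product of constant-curvature components, each spherical (K > 0), hyperbolic (K < 0), or Euclidean (K = 0) — can be sketched with standard embedding-model distance formulas. This is a minimal illustration, not the paper's implementation; the function names and the choice of the hyperboloid/hypersphere embeddings are assumptions for the example.

```python
import numpy as np

def spherical_dist(x, y, K):
    # Sphere of radius R = 1/sqrt(K) embedded in R^{d+1};
    # the geodesic distance is the great-circle arc length.
    R = 1.0 / np.sqrt(K)
    cos = np.clip(np.dot(x, y) / R**2, -1.0, 1.0)
    return R * np.arccos(cos)

def hyperbolic_dist(x, y, K):
    # Hyperboloid of radius R = 1/sqrt(-K) in Minkowski space
    # (time-like coordinate first); distance via the Lorentz inner product.
    R = 1.0 / np.sqrt(-K)
    lorentz = -x[0] * y[0] + np.dot(x[1:], y[1:])
    return R * np.arccosh(np.maximum(-lorentz / R**2, 1.0))

def product_dist(components):
    # components: list of (x, y, K) triples, one per latent component.
    # The product-manifold distance is the l2 norm of the
    # per-component geodesic distances.
    per = []
    for x, y, K in components:
        if K > 0:
            per.append(spherical_dist(x, y, K))
        elif K < 0:
            per.append(hyperbolic_dist(x, y, K))
        else:  # K == 0: the component is flat (Euclidean)
            per.append(np.linalg.norm(x - y))
    return np.sqrt(sum(d**2 for d in per))
```

For example, combining a unit sphere, a unit hyperboloid, and a Euclidean component gives one distance over the whole mixed-curvature latent space; as a component's |K| shrinks, its contribution approaches the ordinary Euclidean distance, matching the abstract's limiting behavior.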


Source arXiv: http://arxiv.org/abs/1911.08411v2