  1. Variational autoencoder - Wikipedia

    A variational autoencoder is a generative model with a prior distribution over its latent variables and a noise (observation) distribution over the data. Usually such models are trained using the expectation-maximization meta-algorithm (e.g. probabilistic PCA, (spike & slab) sparse coding).

  2. In short, a VAE is like an autoencoder, except that it’s also a generative model (defines a distribution p(x)). Unlike autoregressive models, generation only requires one forward pass. Unlike reversible models, we can fit a low-dimensional latent representation. We’ll see we can do interesting things with this...
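
This snippet's point that generation takes only one forward pass can be made concrete. A minimal sketch, not from the linked slides: the decoder architecture, layer sizes, and variable names below are illustrative assumptions (the decoder here is untrained).

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained decoder; sizes are illustrative.
latent_dim, data_dim = 20, 784
decoder = nn.Sequential(
    nn.Linear(latent_dim, 400), nn.ReLU(),
    nn.Linear(400, data_dim), nn.Sigmoid(),
)

# Generation is a single forward pass: sample latents from the
# N(0, I) prior and decode them.
z = torch.randn(16, latent_dim)
with torch.no_grad():
    x_new = decoder(z)  # shape (16, 784): 16 generated samples
```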

  3. Variational autoencoders. - Jeremy Jordan

    Mar 19, 2018 · A variational autoencoder (VAE) provides a probabilistic manner for describing an observation in latent space. Thus, rather than building an encoder which outputs a single value to describe each latent state attribute, we'll formulate our encoder to describe a …
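
To illustrate the distinction this snippet draws between a point-estimate encoder and a distributional one, here is a minimal sketch; the class name, layer sizes, and the diagonal-Gaussian choice are assumptions, not code from the linked post.

```python
import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    """Outputs the mean and log-variance of a diagonal Gaussian over the
    latent code, instead of a single point estimate per attribute."""
    def __init__(self, data_dim=784, latent_dim=20):
        super().__init__()
        self.hidden = nn.Linear(data_dim, 400)
        self.mu = nn.Linear(400, latent_dim)      # mean of q(z | x)
        self.logvar = nn.Linear(400, latent_dim)  # log-variance of q(z | x)

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        return self.mu(h), self.logvar(h)

mu, logvar = GaussianEncoder()(torch.randn(8, 784))  # two heads per input
```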

  4. A Deep Dive into Variational Autoencoders with PyTorch

    Oct 2, 2023 · In this tutorial, we dive deep into the fascinating world of Variational Autoencoders (VAEs). We’ll start by unraveling the foundational concepts, exploring the roles of the encoder and decoder, and drawing comparisons between the traditional …
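
Since this result is a PyTorch tutorial, a compact end-to-end sketch may help orient the reader; this is not the tutorial's code, and the class name and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Encode to a Gaussian, sample with the reparameterization trick,
    then decode. Layer sizes are illustrative, not the tutorial's."""
    def __init__(self, data_dim=784, latent_dim=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(data_dim, 400), nn.ReLU())
        self.mu = nn.Linear(400, latent_dim)
        self.logvar = nn.Linear(400, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 400), nn.ReLU(),
            nn.Linear(400, data_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        return self.dec(z), mu, logvar
```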

  5. Why do we use Gaussian distributions in Variational Autoencoder?

    Apr 11, 2019 · The normal distribution is not the only distribution used for latent variables in VAEs. There are also works using the von Mises-Fisher distribution (Hyperspherical VAEs [1]), and there are VAEs using Gaussian mixtures, which is useful for unsupervised [2] …
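
One practical reason the Gaussian is nevertheless the default choice: its location-scale form makes sampling reparameterizable (and its KL divergence to a standard normal has a closed form). A minimal sketch of the reparameterization trick, with variable names of our choosing:

```python
import torch

def reparameterize(mu, logvar):
    """z = mu + sigma * eps with eps ~ N(0, I). The Gaussian's
    location-scale form separates the randomness from the parameters,
    keeping the sample differentiable w.r.t. mu and logvar."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)
```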

  6. Mar 7, 2024 · Variational AE – completely regularizing the latent space (MNIST dataset). Regions outside of the distribution cannot be used for data generation; we must restrict ourselves to within the distribution, or learn the distribution directly!

  7. An optimized method for variational autoencoders based on Gaussian

    Oct 1, 2023 · In this paper, the priori variant model of Variational Autoencoders based on the Gaussian Cloud Model is proposed to optimize the sampling method of latent variables, network structure and loss function. First, the Gaussian Cloud Model is used to replace the prior distribution of Variational Autoencoders.

  8. The Tilted Variational Autoencoder: Improving Out-of-Distribution

    Feb 1, 2023 · We empirically demonstrate that this simple change in the prior distribution improves VAE performance on the task of detecting unsupervised out-of-distribution (OOD) samples. We also introduce a new OOD testing procedure, called the Will-It-Move test, where the tilted Gaussian achieves remarkable OOD performance.

  9. Variational Autoencoder - SpringerLink

    Feb 2, 2025 · By sampling from the Gaussian distribution p(x | z), for example, a VAE can generate new images of dogs not included in the training data. In the autoencoders introduced in Chap. 4, the goal is to find a code that reconstructs the input as closely as possible.

  10. Understanding Variational Autoencoders – Hillary Ngai – ML …

    Mar 10, 2021 · Variational Autoencoders are generative models with an encoder-decoder architecture. Just like a standard autoencoder, VAEs are trained in an unsupervised manner where the reconstruction error between the input x and the reconstructed input x’ is minimized.
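
The training objective this snippet describes (reconstruction error plus a regularizer on the latent distribution) is usually written as the negative ELBO. A minimal sketch, assuming inputs scaled to [0, 1] and a Bernoulli-style likelihood; the function name is ours:

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar):
    """Negative ELBO: reconstruction error between x and x' plus a KL
    term pulling q(z | x) toward the N(0, I) prior."""
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```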
