VAE for image generation

In the Alternatives: Variational Autoencoders (VAEs) notebook, I don’t understand how to generate new images once the model has been trained. Any suggestions?


If I’m understanding your question correctly, here’s the basic idea: with a VAE, you take an input image and generate a similar-looking output image. Once you’ve trained your model, you can call vae() again, passing in input images, and you’ll get back generated images (which should look similar if the VAE is working well).

Hi Wendy, thanks for your answer.

Actually, what I understood is that a VAE, as an alternative to GANs, can also generate new data by sampling from the learned probability distribution of the input, and that’s what I would like to do.

Ah, OK. The generator part of the VAE is the decoder, so what you’d need to do is use the VAE’s decoder after it’s been trained and pass it samples drawn from the latent space.

To get the decoder, you can use vae.decode. The VAE’s forward method is a good example to look at for how to prepare the decoder’s input: basically, you choose a mean and std, build a normal distribution from them, and then draw samples. You can use rsample if you’re fine with random samples. Remember that the input to the decoder should have shape (batch_size, z_dim), so you’ll want the mean and std you pass to Normal to have that shape too.
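To make that concrete, here’s a minimal sketch of the sampling step. The decoder below is a hypothetical stand-in (the layer sizes, z_dim, and the 28×28 output are assumptions, not from the notebook); in practice you’d use your trained vae.decode instead:

```python
import torch
from torch import nn
from torch.distributions import Normal

# Hypothetical stand-in for a trained decoder; in your case use vae.decode.
# z_dim and the 784 (flattened 28x28) output size are assumed for illustration.
z_dim = 20
decoder = nn.Sequential(
    nn.Linear(z_dim, 400),
    nn.ReLU(),
    nn.Linear(400, 784),
    nn.Sigmoid(),  # pixel values in [0, 1]
)

batch_size = 16
# A standard-normal prior over the latent space: mean 0, std 1,
# both shaped (batch_size, z_dim) to match the decoder's expected input.
dist = Normal(torch.zeros(batch_size, z_dim), torch.ones(batch_size, z_dim))
z = dist.rsample()        # latent samples, shape (batch_size, z_dim)

with torch.no_grad():     # no gradients needed when just generating
    images = decoder(z)   # shape (batch_size, 784)

print(images.shape)       # torch.Size([16, 784])
```

Each row of images would then be reshaped to 28×28 for viewing. Using zeros and ones for the mean and std samples from the prior the VAE was trained to match; you could also decode latents near a particular encoded image to get variations of it.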

OK, that’s just what I was looking for! Thank you so much.