Variational Autoencoders – Removing Noise with Variational Autoencoders

autoencoders, deep learning, machine learning

I have one question related to variational autoencoders: can they be used as a denoising algorithm in the same way as standard denoising autoencoders?

I generally see people removing the encoder part of the VAE and using the rest as a generator of data, but I was wondering if I could still use the encoder-decoder combination (trained with noisy examples as input and clean ones as output) to build a denoising algorithm.

One question that comes to my mind is whether the stochasticity of the VAE would prevent me from building a good denoiser.
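On the stochasticity concern, a small sketch may help. A VAE's encoder outputs a posterior distribution (a mean and a standard deviation) rather than a single code, but at inference time you can decode the posterior mean deterministically, or average several samples. The numbers below are purely illustrative, not from any trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior parameters produced by a VAE encoder for one input.
mu = np.array([0.5, -1.2, 2.0])      # posterior mean (illustrative)
sigma = np.array([0.1, 0.2, 0.05])   # posterior std (illustrative)

def sample_z(mu, sigma):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
    return mu + sigma * rng.standard_normal(mu.shape)

# Stochastic inference: each call yields a slightly different latent code.
z1, z2 = sample_z(mu, sigma), sample_z(mu, sigma)

# Deterministic alternative: decode the posterior mean directly.
z_det = mu

# Or average many samples; the mean of the samples converges to mu,
# so moderate posterior variance need not ruin a denoiser.
z_avg = np.mean([sample_z(mu, sigma) for _ in range(10000)], axis=0)
```

In practice, using the posterior mean at test time is the common way to get a deterministic denoiser out of a VAE.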

Best Answer

In this blog post Francois Chollet gives a nice introduction to autoencoders, illustrated with multiple examples. The basic difference between a standard autoencoder and a denoising autoencoder is that the latter is trained to encode noisy inputs and decode them to noise-free outputs, while the former encodes and decodes the same data. It follows that a denoising autoencoder is explicitly trained to recognize and remove noise.

It wouldn't surprise me if applying an autoencoder trained on noise-free data led to some degree of denoising, since the autoencoder would be looking for the kinds of patterns it had seen during training and amplifying them. The problem, however, is that it wasn't trained to ignore the noise, so some amount of noise may pass through to the outputs, and the result would likely be noisier than when using a denoising autoencoder. I'd guess the results depend on the training data you used, the model specification, and the kind of noise you expect to see.
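The training-pair construction described above can be sketched in a few lines. This is a generic illustration, not code from Chollet's post; the noise level and data shapes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical batch of clean images, flattened to vectors in [0, 1].
x_clean = rng.random((64, 784))

# Corrupt the inputs with additive Gaussian noise (assumed noise model).
noise_factor = 0.3
x_noisy = x_clean + noise_factor * rng.standard_normal(x_clean.shape)
x_noisy = np.clip(x_noisy, 0.0, 1.0)  # keep pixel values in a valid range

# Denoising autoencoder: train on (noisy input, clean target) pairs,
# so the model explicitly learns to remove the corruption.
denoising_pair = (x_noisy, x_clean)

# Standard autoencoder: train on (clean input, clean target) pairs,
# which is why it never explicitly learns to ignore noise.
standard_pair = (x_clean, x_clean)
```

The same pairing applies unchanged to a VAE: keep the usual KL term on the latent code, but compute the reconstruction loss against the clean target instead of the noisy input.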
