I'm using Keras to implement a stacked autoencoder, and I think it may be overfitting. I wanted to include dropout and keep reading about the use of dropout in autoencoders, but I cannot find any examples of dropout being practically implemented in a stacked autoencoder.
Where would the dropout layer(s) go: between every layer, or only after the input layer? Can anyone let me know, or point me to a resource implementing this?
Best Answer
We have tried adding it in a few different ways:
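As a rough illustration (not the answerer's actual code), here is a minimal Keras sketch showing the two placements the question asks about: dropout on the input (which effectively makes it a denoising autoencoder) and dropout between every stacked layer. The layer sizes, dropout rates, and 784-dimensional input are all assumptions for the example:

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))  # assumed input dimension, e.g. flattened MNIST
# Placement 1: dropout directly on the input (acts like input corruption / denoising)
x = layers.Dropout(0.2)(inputs)
x = layers.Dense(256, activation="relu")(x)
# Placement 2: dropout between every stacked encoding/decoding layer
x = layers.Dropout(0.5)(x)
x = layers.Dense(64, activation="relu")(x)   # bottleneck encoding
x = layers.Dropout(0.5)(x)
x = layers.Dense(256, activation="relu")(x)
outputs = layers.Dense(784, activation="sigmoid")(x)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
```

Note that `Dropout` is only active during training; at inference time Keras disables it automatically, so the reconstruction path is deterministic.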
However, we could not eliminate the overfitting completely. We also tried adding noise to the input data (denoising AE) and adding regularization to the encoding layers (sparse AE). But our hand-crafted features still performed better than the AE-created features.
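The denoising and sparse variants mentioned above can be sketched in Keras as follows; the noise level, L1 penalty, and layer sizes are illustrative assumptions, not values from the answer:

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

inputs = keras.Input(shape=(784,))  # assumed input dimension
# Denoising variant: corrupt the input with Gaussian noise (training only)
x = layers.GaussianNoise(0.1)(inputs)
# Sparse variant: L1 activity penalty pushes the encoding toward sparsity
encoded = layers.Dense(64, activation="relu",
                       activity_regularizer=regularizers.l1(1e-5))(x)
outputs = layers.Dense(784, activation="sigmoid")(encoded)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
```

Like `Dropout`, `GaussianNoise` is only applied during training, while the L1 activity regularizer adds a penalty term to the loss at every training step.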