Solved – what does it mean if the discriminator of a GAN always returns the same value

gan, overfitting

I have a trained Generative Adversarial Network (GAN). When I input real and fake images to the discriminator, the returned value is always the same.
Is this a sort of overfitting of the discriminator?
What else could be the problem?

Best Answer

This is unfortunately one of the common failure modes when training GANs, often called mode collapse. Intuitively, nothing in the objective forbids it: the point of the generator $G$ is to produce examples that the discriminator $D$ judges to be real rather than fake. So an easy "solution" for $G$ is to effectively copy a few training images, or to settle on a handful of outputs that it has found to fool $D$. In other words, nothing in the basic GAN objective encourages diversity in $G$'s outputs. When $G$ collapses onto a few near-identical samples, $D$ can easily end up returning essentially the same score for everything it sees; the diagnostic sketch below can help tell whether it is $D$ that has saturated or $G$ that has collapsed.
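Here is a minimal diagnostic sketch in PyTorch. The names `netG`, `netD`, `latent_dim`, and the batch shapes are placeholders that would have to match your own models; it is not code from the question.

```python
import torch


@torch.no_grad()
def gan_collapse_report(netG, netD, real_batch, latent_dim=8, n=256):
    """Quick check: is D saturated, or has G mode-collapsed?

    Assumes netG maps (n, latent_dim) noise to samples shaped like real_batch,
    and netD maps samples to a single logit per sample.
    """
    fake = netG(torch.randn(n, latent_dim))
    d_real = torch.sigmoid(netD(real_batch)).flatten()
    d_fake = torch.sigmoid(netD(fake)).flatten()

    # If both standard deviations are ~0, D outputs a constant regardless of input.
    print(f"D(real): mean={d_real.mean().item():.3f} std={d_real.std().item():.3f}")
    print(f"D(fake): mean={d_fake.mean().item():.3f} std={d_fake.std().item():.3f}")

    # If generated samples barely differ from one another, G has collapsed.
    flat = fake.view(n, -1)
    mean_pairwise_dist = torch.cdist(flat, flat).mean().item()
    print(f"mean pairwise distance between fake samples: {mean_pairwise_dist:.3f}")
```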

The GAN minimization is theoretically done simultaneously over the generator $G$ and the discriminator $D$, but in practice we must alternate between gradient steps on $G$ and on $D$. The trick is to balance this alternation: if you spend too many steps optimizing $G$ against a fixed $D$, $G$ will most likely collapse onto the few outputs that currently fool $D$. A minimal alternating training loop is sketched below.
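As an illustration of the alternating scheme, here is a minimal, hypothetical PyTorch loop on toy 2-D data. The model architectures, learning rates, and the `d_steps_per_g_step` ratio are assumptions for the sketch, not values taken from the question.

```python
import torch
import torch.nn as nn

# Toy setup: 2-D "real" data from a shifted Gaussian, tiny MLP generator and discriminator.
latent_dim, data_dim, batch_size = 8, 2, 64
netG = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
netD = nn.Sequential(nn.Linear(data_dim, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1))

optD = torch.optim.Adam(netD.parameters(), lr=2e-4)
optG = torch.optim.Adam(netG.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
d_steps_per_g_step = 1  # the knob that balances the alternation

for step in range(1000):
    # --- discriminator update(s): push D(real) -> 1 and D(G(z)) -> 0 ---
    for _ in range(d_steps_per_g_step):
        real = torch.randn(batch_size, data_dim) + 3.0              # stand-in real batch
        fake = netG(torch.randn(batch_size, latent_dim)).detach()   # do not backprop into G here
        loss_d = bce(netD(real), torch.ones(batch_size, 1)) + \
                 bce(netD(fake), torch.zeros(batch_size, 1))
        optD.zero_grad()
        loss_d.backward()
        optD.step()

    # --- generator update: push D(G(z)) -> 1 (non-saturating generator loss) ---
    fake = netG(torch.randn(batch_size, latent_dim))
    loss_g = bce(netD(fake), torch.ones(batch_size, 1))
    optG.zero_grad()
    loss_g.backward()
    optG.step()
```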

For ways to improve sample diversity, one line of work uses minibatch features to let $D$ detect collapse: instead of scoring each sample in isolation, the discriminator also sees how similar the samples in a batch are to each other. In particular, see the minibatch discrimination section of this paper: https://arxiv.org/pdf/1606.03498.pdf. A sketch of such a layer follows.
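For concreteness, here is one possible PyTorch sketch of a minibatch discrimination layer along the lines of Salimans et al. The parameter shapes, initialization scale, and module name are my own assumptions, not taken from the paper's released code.

```python
import torch
import torch.nn as nn


class MinibatchDiscrimination(nn.Module):
    """Minibatch discrimination layer (after Salimans et al., 2016).

    Projects each sample's features through a learned tensor, measures how
    close each sample is to every other sample in the batch, and appends
    those closeness statistics to the features. This lets the discriminator
    notice when the generator produces near-identical samples.
    """

    def __init__(self, in_features: int, out_features: int, kernel_dims: int):
        super().__init__()
        self.out_features = out_features
        self.kernel_dims = kernel_dims
        # Learned tensor T of shape (A, B, C), stored flattened as (A, B*C).
        self.T = nn.Parameter(torch.randn(in_features, out_features * kernel_dims) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n = x.size(0)
        # M_i has shape (B, C) for each sample i.
        m = (x @ self.T).view(n, self.out_features, self.kernel_dims)
        # Pairwise L1 distances between M_i and M_j, per output feature b.
        diffs = m.unsqueeze(0) - m.unsqueeze(1)   # (n, n, B, C)
        l1 = diffs.abs().sum(dim=3)               # (n, n, B)
        c = torch.exp(-l1)                        # closeness in (0, 1]
        # Sum closeness over the other samples (subtract the self-term exp(0) = 1).
        o = c.sum(dim=1) - 1.0                    # (n, B)
        # Append the minibatch statistics to the original features.
        return torch.cat([x, o], dim=1)
```

Note that the layer widens the feature vector from `in_features` to `in_features + out_features`, so the discriminator's final linear layer would need to be sized accordingly.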
