Solved – the stopping criterion of generative adversarial nets

generative-models, neural-networks

I have trained a GAN (Generative Adversarial Network) with a binary cross-entropy loss function in both the generator and the discriminator. During training, the generator loss and the discriminator loss over 400 epochs vary as follows:

[Plot: generator and discriminator loss curves over 400 training epochs]

Does this variation look correct?

What is the stopping criterion for GANs if we use the loss values? Can I use early stopping?

Best Answer

The loss values obtained while training a GAN are almost never reliable. Most GAN papers use both "qualitative" and "quantitative" evaluation methods to judge performance. Qualitative methods are generally subjective, involving human observers tasked with deciding whether a sample is real or fake. In such cases, what matters is how good the generated sample looks: if the samples look realistic, the training is considered successful, even though the loss values may keep fluctuating. However, qualitative evaluations are biased and do not paint a complete picture (for example, under mode collapse the generated images can still look good individually but are far less diverse).

Therefore, using the loss values directly is not recommended for GANs. Instead, metrics such as the Inception Score, the Fréchet Inception Distance (FID), and perceptual similarity measures (e.g. LPIPS) are used to interpret the results. To answer your question: these quantitative metrics can be used effectively for early stopping, i.e. stopping the training when the FID score worsens or perceptual similarity stops improving. A comprehensive list of metrics used in GAN evaluation is provided in this paper.
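To make the early-stopping idea concrete, here is a minimal sketch of a patience-based stopper driven by a metric where lower is better (such as FID). The class name, parameters, and thresholds are my own illustrative choices, not from any particular library:

```python
class MetricEarlyStopper:
    """Track a quantitative GAN metric evaluated periodically
    (e.g. FID, where lower is better) and signal a stop when it
    has not improved for `patience` consecutive evaluations."""

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience      # evaluations to wait without improvement
        self.min_delta = min_delta    # minimum decrease that counts as improvement
        self.best = float("inf")      # best (lowest) metric value seen so far
        self.bad_evals = 0            # consecutive evaluations without improvement

    def step(self, metric):
        """Record one metric evaluation; return True when training should stop."""
        if metric < self.best - self.min_delta:
            self.best = metric
            self.bad_evals = 0
        else:
            self.bad_evals += 1
        return self.bad_evals >= self.patience
```

In a training loop you would compute FID on generated samples every few epochs, pass it to `step`, and break out of the loop when it returns `True`, restoring the checkpoint saved at `best`.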

As mentioned in the previous answer, other GAN variants such as WGAN and WGAN-LP provide better mathematical insight into GAN training by estimating the Wasserstein distance and enforcing Lipschitz continuity on the critic.
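For reference, the Wasserstein objective mentioned above reduces to a difference of mean critic scores. A minimal NumPy sketch (function names are mine; in WGAN the critic must additionally be kept Lipschitz-continuous, e.g. via weight clipping or a gradient/Lipschitz penalty, which is omitted here):

```python
import numpy as np

def wgan_critic_loss(real_scores, fake_scores):
    # The critic maximizes E[D(real)] - E[D(fake)]; written as a
    # loss to minimize, the signs flip. The negative of this loss
    # is the critic's estimate of the Wasserstein distance.
    return np.mean(fake_scores) - np.mean(real_scores)

def wgan_generator_loss(fake_scores):
    # The generator maximizes E[D(fake)], i.e. minimizes its negative.
    return -np.mean(fake_scores)
```

Because this estimate correlates with sample quality far better than the binary cross-entropy losses in the question, the critic loss curve itself becomes a usable training-progress signal.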
