Solved – Oscillating validation accuracy for a convolutional neural network

convolution, deep learning, machine learning, neural networks

My CNN training gives weird validation accuracy results. At epochs 2.5, 3.5, and 4.5, the validation accuracy is higher; in other words, going over only half of the batches reaches better accuracy, but if I go over all the batches (a full epoch), the validation accuracy drops. I repeated this experiment several times with random subsets of the data, and the results look similar.

Is anything wrong here? Why is the accuracy fluctuating, and why does half an epoch give better accuracy than a full one?

I use Adadelta to train my network.

[plot of training and validation accuracy over epochs]

Best Answer

This is likely due to the ordering of your dataset. If there are many observations of the same class in a row, the weights of the network will move too far in the direction of classifying that class.

A common cause is balancing the classes in your dataset by resampling observations and appending them to the end of the dataset. Shuffle your dataset; that should help you avoid the fluctuations in accuracy (and perhaps yield higher accuracy overall).
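As a minimal sketch of the fix, here is how you might shuffle features and labels together with a single permutation (re-drawn each epoch). The array shapes and class-ordered labels are hypothetical, just to illustrate the class-grouped ordering the answer describes:

```python
import numpy as np

# Hypothetical dataset whose labels are grouped by class (all 0s, then
# all 1s) -- e.g. after resampling and appending to balance classes.
# Class-homogeneous runs like this push the weights back and forth.
X = np.arange(20, dtype=float).reshape(10, 2)  # 10 samples, 2 features
y = np.array([0] * 5 + [1] * 5)                # class-ordered labels

# Shuffle X and y with the SAME permutation so rows stay aligned;
# ideally draw a fresh permutation at the start of every epoch.
rng = np.random.default_rng(seed=0)
perm = rng.permutation(len(y))
X_shuffled, y_shuffled = X[perm], y[perm]

# Each shuffled feature row still carries its original label.
assert np.array_equal(y_shuffled, y[perm])
```

With a framework data loader (e.g. PyTorch's `DataLoader`) the same effect is usually a `shuffle=True` flag, which reshuffles once per epoch.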