Solved – How to prevent overfitting in convolutional neural networks

convolution, deep-learning, deep-belief-networks, neural-networks, overfitting

Given a certain amount of labeled data, we define the network structure: the number of layers, the types of layers, the number of convolutional layers, the number of pooling layers, and so on.

We then train the parameters using backpropagation, monitoring the training loss as training proceeds and the accuracy on a validation set.

However, the training loss is nearly zero while the validation accuracy stays unchanged, no matter how much we decrease the learning rate (see the sketch after the questions below).

  • In this circumstance, is it overfitting?
  • Should we change the network structure?
  • More layers, for more parameters?
  • Could you recommend any suggestions or references?
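For concreteness, here is a minimal sketch of the setup described above, written in PyTorch (the framework, the toy network, and the synthetic data are all assumptions; the question does not specify them):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy labeled data: 1000 grayscale 28x28 images, 10 classes.
X = torch.randn(1000, 1, 28, 28)
y = torch.randint(0, 10, (1000,))
train_loader = DataLoader(TensorDataset(X[:800], y[:800]),
                          batch_size=32, shuffle=True)
val_loader = DataLoader(TensorDataset(X[800:], y[800:]), batch_size=32)

# A small stack of convolutional and pooling layers, as in the question.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 7 * 7, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    model.train()
    total_loss = 0.0
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()          # backpropagation
        optimizer.step()
        total_loss += loss.item() * len(xb)

    # Validation accuracy, monitored alongside the training loss.
    model.eval()
    correct = 0
    with torch.no_grad():
        for xb, yb in val_loader:
            correct += (model(xb).argmax(1) == yb).sum().item()
    print(f"epoch {epoch}: train loss {total_loss / 800:.4f}, "
          f"val acc {correct / 200:.3f}")
```

A large and growing gap between the training loss and the validation accuracy printed here is the usual signature of overfitting.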

Best Answer

I would say there might be a bug in how the errors are calculated during backpropagation.

Also, how big is your data set? How many epochs are you training for? How is the training being done? I could help you more if I had more details.
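One quick way to rule out such a bug is a numerical gradient check: compare the analytic gradient from backpropagation against a finite-difference estimate on a tiny model. A minimal sketch in NumPy (the one-layer model, the loss, and the tolerance are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))          # weights of a tiny linear layer
x = rng.normal(size=5)               # one input
t = rng.normal(size=3)               # one target

def loss(W):
    # Squared-error loss of a linear layer: L = 0.5 * ||W x - t||^2
    e = W @ x - t
    return 0.5 * e @ e

# Analytic gradient from backpropagation: dL/dW = (W x - t) x^T
grad_analytic = np.outer(W @ x - t, x)

# Finite-difference estimate, perturbing one weight at a time.
eps = 1e-6
grad_numeric = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        grad_numeric[i, j] = (loss(Wp) - loss(Wm)) / (2 * eps)

# The relative error should be tiny (around 1e-8 in double precision);
# anything near 1e-2 signals a bug in the backward pass.
rel_err = np.abs(grad_analytic - grad_numeric).max() / (
    np.abs(grad_analytic).max() + np.abs(grad_numeric).max())
print("max relative error:", rel_err)
```

The same check applies to a full convolutional network: run it layer by layer on a handful of weights before trusting the training curves.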

This might help you: http://www.cs.toronto.edu/~hinton/csc2515/notes/lec4.htm
