Neural Networks – Setting a Random Seed for Neural Network Models

machine learning, model selection, neural networks

I'm training a neural network on a problem that can be predicted with 100% accuracy. However, the results vary between 99% and 100%, even when I train 10 different networks and take the average. From reading around, I found this is due to the random weight initialization each time the program runs. Is it okay to set the random seed to 1 and then train 10 different networks and take the average?
This would reduce the variation and make model selection easier.
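To make the mechanism concrete: the run-to-run variation comes from the random initial weights, and fixing the seed makes that draw reproducible. A minimal numpy sketch (the shapes and seed values here are hypothetical, just to illustrate the behavior):

```python
import numpy as np

def init_weights(seed):
    """Draw a weight matrix for a toy layer; the seed fixes the draw."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=(4, 3))

w1 = init_weights(1)
w2 = init_weights(1)
w3 = init_weights(2)

print(np.array_equal(w1, w2))  # same seed -> identical initialization
print(np.array_equal(w1, w3))  # different seed -> different initialization
```

With a fixed seed, all 10 "different" networks start from the same weights (unless you vary the seed per network), so averaging them no longer averages over initializations.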

Best Answer

You're trying to sweep the problem under the rug. The variability is not a mere annoyance: it is information about how sensitive your models are to initialization. If your model selection depends on the seed, then fixing the seed only hides the issue, it does not remove it. The bottom line is that your models are not distinguishable enough on the sample you have.
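One honest alternative to fixing the seed is to treat the seed as a source of variance and report it: train across several seeds and summarize accuracy as mean and standard deviation. A toy sketch with a numpy logistic-regression "network" on synthetic, linearly separable data (a hypothetical stand-in for the real task):

```python
import numpy as np

# Hypothetical stand-in dataset: 200 points, separable by x0 + x1 > 0.
rng_data = np.random.default_rng(0)
X = rng_data.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def train_accuracy(seed, epochs=50, lr=0.1):
    """Train a one-layer logistic model from a seeded random init."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=2)  # random init: the source of run-to-run variance
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid outputs
        w -= lr * (X.T @ (p - y)) / len(y)        # gradient step on weights
        b -= lr * float(np.mean(p - y))           # gradient step on bias
    return float(np.mean(((X @ w + b) > 0) == y))

# Report variability across seeds instead of hiding it behind one seed.
accs = [train_accuracy(s) for s in range(10)]
print(f"mean={np.mean(accs):.3f} std={np.std(accs):.3f}")
```

If the standard deviation across seeds is comparable to the gap between candidate models, the data cannot distinguish those models, and no single seed should be used to pick a winner.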
