Solved – How to improve neural network stability

machine learning, neural networks, r

I'm using the neuralnet package in R to build a NN with 14 inputs and one output. I build/train the network several times using the same input training data and the same network architecture/settings.

After each network is produced, I use it on a stand-alone set of test data to calculate some predicted values. I'm finding a large variance in the predictions from one iteration to the next, despite all the inputs (both the training data and the test data) remaining the same each time I build the network.

I understand that the weights produced within the NN will differ each time and that no two networks will be identical, but what can I try in order to produce networks that are more consistent across each training run, given identical data?

Best Answer

In general, you would get more stability by increasing the number of hidden nodes and using an appropriate weight decay (a.k.a. ridge penalty).
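For example, a rough sketch of that idea using the nnet package (not your exact setup; `train_df`/`test_df`, the target name `y`, and the tuning values are placeholders):

```r
library(nnet)

# Placeholder data frames: train_df holds the 14 inputs plus the target 'y',
# test_df holds the stand-alone test inputs.
fit <- nnet(y ~ ., data = train_df,
            size   = 20,    # a generous number of hidden nodes
            decay  = 0.01,  # ridge-style weight decay to stabilise the weights
            linout = TRUE,  # linear output unit for a numeric target
            maxit  = 500)

pred <- predict(fit, newdata = test_df)
```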

Specifically, I would recommend using the caret package to get a better understanding of your accuracy (and even of the uncertainty in your accuracy). caret also provides avNNet, which builds an ensemble of neural networks to reduce the effect of the initial random seeds. I personally haven't seen a huge improvement from avNNet, but it could address your original question. A sketch of what that might look like follows; the resampling scheme and tuning grid are only placeholders.
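```r
library(caret)

# Repeated cross-validation gives an estimate of accuracy and of its spread
ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 3)

set.seed(1)
av_fit <- train(y ~ ., data = train_df,
                method    = "avNNet",   # averages several differently-seeded nets
                trControl = ctrl,
                tuneGrid  = expand.grid(size  = c(10, 20),
                                        decay = c(0.01, 0.1),
                                        bag   = FALSE),
                linout = TRUE, trace = FALSE, maxit = 500)

pred <- predict(av_fit, newdata = test_df)
```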

I'd also make sure that your inputs are all properly conditioned. Have you orthogonalized and then re-scaled them? caret can do this pre-processing for you as well, via its pcaNNet function.
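Something along these lines, where `predictor_cols` is a placeholder for the names of your 14 input columns:

```r
library(caret)

# Centre, scale and PCA-rotate the inputs, then apply the same transform to the test set
pp      <- preProcess(train_df[, predictor_cols], method = c("center", "scale", "pca"))
train_x <- predict(pp, train_df[, predictor_cols])
test_x  <- predict(pp, test_df[, predictor_cols])

# pcaNNet() bundles the PCA step and the network fit into one call
pca_fit <- pcaNNet(train_df[, predictor_cols], train_df$y,
                   size = 20, decay = 0.01, linout = TRUE, trace = FALSE)
pred    <- predict(pca_fit, test_df[, predictor_cols])
```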

Lastly, you could consider adding some skip-layer connections. You need to make sure there are no outliers/leverage points in your data to skew those connections, though.
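As far as I know neuralnet doesn't expose that option directly, but nnet (which the caret methods above wrap) does, via `skip = TRUE`; same placeholder data frame as before:

```r
library(nnet)

# skip = TRUE adds direct input-to-output connections on top of the hidden layer
skip_fit <- nnet(y ~ ., data = train_df,
                 size = 20, decay = 0.01,
                 skip = TRUE,
                 linout = TRUE, maxit = 500)
```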
