MATLAB: Why am I getting different performance results from a neural network trained with 100% train, 0% validation, 0% testing?

Deep Learning Toolbox, network, neural, performance, random, train

To train a neural network, I have split my data into 100% training, 0% validation, and 0% testing. I expect the performance result of the network to be the same across multiple runs because I am using exactly the same data for training every time.

Best Answer

This behavior is expected in Neural Network Toolbox (R2009b): even when the data set is split 100% training, 0% validation, 0% testing, the performance result of the network can vary across multiple runs.
There are two ways that randomness can creep into the training of a neural network. The first source is how the training, validation, and testing sets are allotted; with a 100%/0%/0% split this source is fixed, so it cannot explain the variation here.

The second source is how the initial weights and biases of the network are set. Some initial values for these parameters must exist before iterative training can begin, and since it is rarely clear which initial values are "best", some randomness is needed to explore the space of "good guesses". For feedforward networks, the Nguyen-Widrow method is used: it applies some a priori information to narrow down the range of "good" initial values, then randomly picks the actual initial values from that narrowed subset. If you type "edit initnw" at the MATLAB command prompt, you will see that INITNW uses RAND.
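To make the two sources concrete, here is a minimal sketch using the newer feedforwardnet API (in R2009b the equivalent network-creation function was newff); simplefit_dataset is an example data set shipped with the toolbox:

% Example data set shipped with the toolbox
[x, t] = simplefit_dataset;
net = feedforwardnet(10);

% Source 1: data division -- pin it to 100% train, 0% validation, 0% test
net.divideParam.trainRatio = 1.0;
net.divideParam.valRatio   = 0.0;
net.divideParam.testRatio  = 0.0;

% Source 2: weight initialization -- CONFIGURE draws the initial weights
% with the Nguyen-Widrow method (INITNW), which calls RAND
net = configure(net, x, t);
disp(net.IW{1,1})   % differs on every run unless the random stream is reset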
As a workaround, to get the same performance result every time, reset the random stream before each training run:
% Reset the default random stream so that INITNW (and any other calls
% to RAND made during training) produce the same values on every run
stream = RandStream.getDefaultStream;
reset(stream);
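Note that RandStream.getDefaultStream is the R2009b-era API; in later releases (R2011a and onward) the same effect can be achieved with the rng function. A minimal sketch, reusing net, x, and t from the example above:

% Seed the global random stream so INITNW draws identical initial
% weights, making the training result reproducible across runs
rng(0);                    % or rng('default')
net = init(net);           % re-initialize weights from the seeded stream
net = train(net, x, t);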