I wonder why my NN results show the same performance (error) on every run, even though I split the data set into training, validation, and test sets randomly using 'dividerand'.
I understand the weights and biases are always the same because I call rng('default'), but I would expect the results to differ from run to run because of the randomness of the data split.
Am I misunderstanding how the NN works?
The code is the following:
% Solve an Input-Output Fitting problem with a Neural Network
% Script generated by Neural Fitting app
% Created 28-Dec-2016 14:15:01
%
% This script assumes these variables are defined:
%   InputT12 - input data.
%   TargetT12 - target data.
rng('default')
x = InputT12;
t = TargetT12;

% Choose a Training Function
% For a list of all training functions type: help nntrain
% 'trainlm' is usually fastest.
% 'trainbr' takes longer but may be better for challenging problems.
% 'trainscg' uses less memory. Suitable in low memory situations.
trainFcn = 'trainlm'; % Levenberg-Marquardt backpropagation.
% Create a Fitting Network
hiddenLayerSize = 10;
net = fitnet(hiddenLayerSize,trainFcn);

% Setup Division of Data for Training, Validation, Testing
net.divideFcn = 'dividerand';
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

% Train the Network
[net,tr] = train(net,x,t);

% Test the Network
y = net(x);
e = gsubtract(t,y);
performance = perform(net,t,y)

% View the Network
view(net)

% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, ploterrhist(e)
%figure, plotregression(t,y)
%figure, plotfit(net,x,t)
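For reference, the behaviour can be reproduced in isolation: rng('default') resets MATLAB's global random stream to a fixed state (Mersenne Twister, seed 0), so every random draw that follows it, including the one 'dividerand' uses to pick training/validation/test indices, comes out identical on each run. A minimal sketch (the sample count of 10 is illustrative):

```matlab
% First run: reset the stream, then divide 10 samples 70/15/15.
rng('default')
[trainInd1,valInd1,testInd1] = dividerand(10,0.70,0.15,0.15);

% Second run: same reset, same division call.
rng('default')
[trainInd2,valInd2,testInd2] = dividerand(10,0.70,0.15,0.15);

% Same seed, same stream state, therefore the same "random" split.
isequal(trainInd1,trainInd2)   % true
```

Removing the rng('default') line (or seeding with e.g. rng('shuffle')) lets both the initial weights and the data division vary between runs.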