MATLAB: Equation that computes a Neural Network in MATLAB


I created a neural network in MATLAB. This is the script:
load dati.mat;
inputs=dati(:,1:8)';
targets=dati(:,9)';
hiddenLayerSize = 10;
net = patternnet(hiddenLayerSize);
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax', 'mapstd','processpca'};
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax', 'mapstd','processpca'};
net = struct(net);
net.inputs{1}.processParams{2}.ymin = 0;
net.inputs{1}.processParams{4}.maxfrac = 0.02;
net.outputs{2}.processParams{4}.maxfrac = 0.02;
net.outputs{2}.processParams{2}.ymin = 0;
net = network(net);
net.divideFcn = 'divideind';
net.divideMode = 'sample'; % Divide up every sample
net.divideParam.trainInd = 1:428;
net.divideParam.valInd = 429:520;
net.divideParam.testInd = 521:612;
net.trainFcn = 'trainscg'; % Scaled conjugate gradient backpropagation
net.performFcn = 'mse'; % Mean squared error
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', 'plotregression', 'plotconfusion', 'plotroc'};
net=init(net);
net.trainParam.max_fail=20;
[net,tr] = train(net,inputs,targets);
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
Now I want to save the network's weights and biases and write out its equation. I saved the weights and biases:
W1=net.IW{1,1};
W2=net.LW{2,1};
b1=net.b{1,1};
b2=net.b{2,1};
Then I reproduced the data preprocessing and wrote the following equation:
max_range=0;
[y,ps]=removeconstantrows(input, max_range);
ymin=0;
ymax=1;
[y,ps2]=mapminmax(y,ymin,ymax);
ymean=0;
ystd=1;
y=mapstd(x,ymean,ystd);
maxfrac=0.02;
y=processpca(y,maxfrac);
in=y';
uscita=tansig(W2*(tansig(W1*in+b1))+b2);
But with the same input, input = [1:8], I get different results. Why? What's wrong? Please help; it's important!
I use MATLAB R2010b.

Best Answer

I created a neural network in MATLAB. This is the script:
load dati.mat;
inputs=dati(:,1:8)';
targets=dati(:,9)';
% SEE DOCUMENTATION (DOC) FOR DEFAULTS OF PATTERNNET
%
% SINCE THIS IS PATTERN RECOGNITION WITH ONE OUTPUT,
% TARGETS SHOULD BE UNIPOLAR BINARY {0,1}, WITH NO NEED
% FOR TRANSFORMATIONS. THE MOST APPROPRIATE OUTPUT
% ACTIVATION IS LOGSIG. HOWEVER, PURELIN CAN BE USED
% EVEN THOUGH OUTPUTS ARE NOT RESTRICTED TO (0,1).
% TO SEE THE SIZE AND SCALE OF THE DATA
==> [ I N ] = size(inputs) % [ 8 612 ]
==> [ O N ] = size(targets) % [ 1 612 ]
==> minmaxin = minmax(inputs)
==> minmaxtarg = minmax(targets)
% TO ESTIMATE A REASONABLE VALUE FOR H
==> Ntrn = round(0.7*N) % 428
==> Neq = Ntrn*O % 428 No. of training equations
% Nw = O+(I+O+1)*H = 101 No. of unknown weights
% For good weight estimates when training to convergence,
% require Neq >= Nw but desire Neq >> Nw
%
% Hub = floor((Neq-O)/(I+O+1)) % 42 Upper bound for H
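% A quick numeric check of the above (a sketch; I, O, Ntrn and Neq
% as computed by the lines earlier in this answer):
H = 10;
Nw = O + (I+O+1)*H % 1 + (8+1+1)*10 = 101
Hub = floor((Neq-O)/(I+O+1)) % floor((428-1)/10) = 42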
hiddenLayerSize = 10; % H
% H = 10 ==> Neq/Nw = 428/101 is only a factor of 4. However,
% not worried because using Early Stopping (aka Stopped Training)
% with a validation set.
net = patternnet(hiddenLayerSize);
% WHY ASSUME THE DEFAULT OF TRAINLM THEN CHANGE LATER?
% DON'T KNOW FINAL I AND O YET BECAUSE OF CONSTANT
% ROWS AND PCA DIMENSIONALITY REDUCTION
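% PER THE LOGSIG NOTE ABOVE, the output activation could be switched
% here; a one-line sketch of that suggestion (not in the original script):
net.layers{2}.transferFcn = 'logsig';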
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax', 'mapstd','processpca'};
%XX: USE MAPSTD, BUT NOT MAPMINMAX WITH PROCESSPCA
% WARNING: PCA MAY NOT BE USEFUL FOR SOME PATTERN
% RECOGNITION PROBLEMS
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax','mapstd','processpca'};
%XX: WHY USE PCA FOR 'ANY' OUTPUT??
%XX: WHY TRANSFORM 'THIS' OUTPUT? ISN'T IT UNIPOLAR BINARY?
net = struct(net);
%XX: WHY IS THIS NECESSARY?... DELETE?
net.inputs{1}.processParams{2}.ymin = 0;
%XX: DELETE, USE MAPSTD FOR INPUT
net.outputs{2}.processParams{2}.ymin = 0;
net.inputs{1}.processParams{4}.maxfrac = 0.02;
net.outputs{2}.processParams{4}.maxfrac = 0.02;
%XX: DON'T USE PCA FOR OUTPUTS
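% A minimal sketch of the settings the comments above point toward
% (an interpretation, not verbatim from the answer):
net.inputs{1}.processFcns = {'removeconstantrows','mapstd','processpca'};
net.inputs{1}.processParams{3}.maxfrac = 0.02; % processpca is now 3rd
net.outputs{2}.processFcns = {'removeconstantrows'}; % {0,1} targets need no transform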
net = network(net);
%XX INVALID (SEE DOC). SHOULD DEFINE NET TOPOLOGY.
net.divideFcn = 'divideind';
net.divideMode = 'sample'; % Divide up every sample
net.divideParam.trainInd = 1:428;
net.divideParam.valInd = 429:520;
net.divideParam.testInd = 521:612;
net.trainFcn = 'trainscg'; % Scaled conjugate gradient backpropagation
% DEFINE IN CALL OF PATTERNNET
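% e.g., a minimal sketch of that alternative (PATTERNNET accepts a
% training-function argument):
net = patternnet(hiddenLayerSize, 'trainscg');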
net.performFcn = 'mse'; % Mean squared error
% UNNECESSARY. DEFAULT
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', 'plotregression', 'plotconfusion', 'plotroc'};
net=init(net);
% AREN'T WEIGHTS ALREADY INITIALIZED?
net.trainParam.max_fail=20;
% PROBABLY TOO HIGH
[net,tr] = train(net,inputs,targets);
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
% WHAT VALUES DO YOU GET?
Now I want to save the network's weights and biases and write out its equation. I saved the weights and biases:
W1=net.IW{1,1};
W2=net.LW{2,1};
b1=net.b{1,1};
b2=net.b{2,1};
Then I reproduced the data preprocessing and wrote the following equation:
max_range=0;
% WHY BOTHER SPECIFYING DEFAULTS?
[y,ps]=removeconstantrows(input, max_range);
ymin=0;
ymax=1;
[y,ps2]=mapminmax(y,ymin,ymax);
% DO NOT USE y ON BOTH SIDES OF AN EQUATION
%XX DELETE MINMAX TRANSFORMATION
ymean=0;
ystd=1;
% WHY BOTHER SPECIFYING DEFAULTS?
y=mapstd(x,ymean,ystd);
%XX x IS NOT DEFINED
maxfrac=0.02;
y=processpca(y,maxfrac);
in=y';
uscita = tansig(W2*(tansig(W1*in+b1))+b2);
% WHY NOT LOGSIG FOR OUTPUT LAYER??
But with the same input, input = [1:8], I get different results. Why? What's wrong? Please help; it's important!
I use MATLAB R2010b.
% HARD TO TELL. THIS CODE CANNOT RUN WITHOUT DEFINING x.
% WHAT ABOUT THE OUTPUT NORMALIZATION AND TRANSFORMATION?
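To make the hand-written equation agree with net(input), every processing step has to reuse the settings that TRAIN fitted and stored in the network, and the output-side processing has to be undone in reverse order. A minimal sketch, assuming NNT 7.x (R2010b), where those stored settings are exposed as net.inputs{1}.processSettings and net.outputs{2}.processSettings:
% Sketch: reproduce net(xnew) by hand with the STORED settings,
% instead of re-fitting mapminmax/mapstd/processpca on the new sample.
xnew = (1:8)'; % one 8x1 input sample
inPS = net.inputs{1}.processSettings; % fitted at training time
outPS = net.outputs{2}.processSettings;
y = removeconstantrows('apply', xnew, inPS{1});
y = mapminmax('apply', y, inPS{2});
y = mapstd('apply', y, inPS{3});
y = processpca('apply', y, inPS{4});
h = tansig(net.IW{1,1}*y + net.b{1}); % hidden layer
o = tansig(net.LW{2,1}*h + net.b{2}); % match net.layers{2}.transferFcn
% Undo the output-side processing, last step first:
o = processpca('reverse', o, outPS{4});
o = mapstd('reverse', o, outPS{3});
o = mapminmax('reverse', o, outPS{2});
uscita = removeconstantrows('reverse', o, outPS{1});
max(abs(uscita - net(xnew))) % should be ~0
The original code instead re-fitted the normalizations on the single new sample (and referenced the undefined x), so the hand-computed output could never match.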
Hope this helps.
Greg