MATLAB: How can I choose the parameters of the network?

neural network, neural networks

Hi, I'm a beginner in neural networks, so to justify using them in my doctoral research I want to run some comparisons with other tools to show what the method can do. I want to create a network whose input is x = [ -1 1 -1 1 -1 1 -1 1; -1 -1 1 1 -1 -1 1 1; -1 -1 -1 -1 1 1 1 1] and whose output is, for example, y = [ 9 6 4 3 8 1 3 4], knowing that, for example, 9 = a*(-1) + b*(-1) + c*(-1) + a1*(-1)*(-1) + b1*(-1)*(-1) + c1*(-1)*(-1) + d*(-1)*(-1)*(-1), and similarly for the other outputs. How can I choose the parameters of my network?

Best Answer

close all, clear all, clc
x0 = [ -1 1 -1 1 -1 1 -1 1; -1 -1 1 1 -1 -1 1 1; -1 -1 -1 -1 1 1 1 1]
x4 =[ 9 6 4 3 8 1 3 4]
x = [ x0; x4]
t = [ 23.72, 19.84, -8.91, 4.41, -7.53, 1.41, 5.16, 0.66 ]
[I N] = size(x) % [ 4 8 ]
[O N ] = size(t) % [ 1 8 ]
Neq = N*O % 8
% Obviously, you have Neq = 8 equations.
meant2 = mean(t,2) % 4.845
vart12 = var(t,1,2) % 119.13 Biased
vart02 = var(t,0,2) % 136.15 Unbiased
% Therefore, if you try a constant model you get
y00 = repmat(mean(t,2),1,N)
MSE00 = vart12
MSE00a = vart02
% If you try a linear model solution (see my previous code) you get
W0 = t/[x;ones(1,N)] %3.3242 -2.9258 -3.9665 1.2714 -1.194
Nw0 = numel(W0) %5
Ndof0 = Neq-Nw0 %3
y0 = W0*[x;ones(1,N)]
e0 = t-y0
SSE0 = sse(e0) % 536.7
MSE0 = SSE0/Neq % 67.1
MSE0a =SSE0/Ndof0 % 178.9
NMSE0 = MSE0/MSE00 % 0.563
NMSE0a = MSE0a/MSE00a % 1.314
R20 = 1-NMSE0 % 0.437
R20a = 1-NMSE0a % -0.314
% R20 = 0.44 means that the linear model appears to account for 44% of the
% target variance and is therefore better than the constant model. However,
% R20a < 0 means that, when the bias of testing with training data is taken
% into account, the linear model is probably worse, and little, if any,
% confidence should be put in the R2 estimate for future data.
% In order to obtain a better-fitting model, higher-order polynomial or
% neural network models can be tried. In general, however, this will result
% in decreased, and eventually negative (more unknowns than equations),
% estimation degrees of freedom.
% Apparently you have deduced a reduced-term 3rd-order polynomial model for
% the I/O relation. Was this done via underdetermined linear least squares,
% with the coefficients of the 12 missing terms turning out negligible? If
% so, was regularization used?
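% For reference, the reduced-term cubic model described in the question can
% be fitted directly by linear least squares. This is only a sketch (the
% coefficient names a, b, c, a1, b1, c1, d follow the question; x0 and x4
% are defined above):
x1 = x0(1,:); x2 = x0(2,:); x3 = x0(3,:);
% 7 x 8 design matrix: linear, pairwise-product, and triple-product terms
X = [ x1; x2; x3; x1.*x2; x1.*x3; x2.*x3; x1.*x2.*x3 ];
coef = x4/X    % least-squares row vector [ a b c a1 b1 c1 d ]
yfit = coef*X; % compare with x4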
% Neural Network Solutions:
% H = 0 hidden nodes corresponds to the linear model.
% H = 1 hidden node can be worse, if not much better, than H = 0.
% H = 2 overfits the model (Nw > Neq); therefore R2a < 0. Nevertheless,
% it is interesting to see the results for Ntrials = 20 multiple designs for
% H = 1 and H = 2:
% H=1 result =
% Trial Nepochs R2 R2a
% 1 11 0.242 -4.309
% 2 10 0.014 -5.901
% 3 10 0.038 -5.736
% 4 11 0.260 -4.183
% 5 11 => 0.837 -0.144
% 6 8 0.427 -3.010
% 7 10 => 0.837 -0.144
% 8 33 => 0.851 -0.047
% 9 7 0.178 -4.752
% 10 17 => 0.851 -0.075
% 11 458 => 0.867 0.061
% 12 20 => 0.851 -0.047
% 13 146 0.293 -3.948
% 14 24 => 0.851 -0.047
% 15 84 0.293 -3.948
% 16 11 0.290 -3.971
% 17 16 0.293 -3.948
% 18 255 0.298 -3.914
% 19 4 0.000 -5.998
% 20 317 => 0.866 0.061
%


% maxresult =
% 20 458 0.866 0.061
% THERE ARE 7/20 R^2 RESULTS IN [ 0.837 0.867 ]
% H=2 OVERFITTING (Nw > Neq) result =
% Trial Nepochs R2
% 1 391 0.791
% 2 749 0.896
% 3 456 0.896
% 4 247 ==>0.998
% 5 672 0.896
% 6 7 0.046
% 7 863 0.910
% 8 19 0.860
% 9 40 ==>0.990
% 10 669 0.913
% 11 7 ==>0.992
% 12 18 0.858
% 13 14 ==>0.993
% 14 526 0.910
% 15 625 0.579
% 16 16 ==>0.991
% 17 1000 0.911
% 18 952 0.896
% 19 9 0.036
% 20 132 ==>0.993
%
% maxresult =
%
% 20 1000 0.998
%
% THERE ARE 6/20 R^2 RESULTS IN [ 0.990 0.998 ]
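% A multiple-design loop of the kind used to generate the tables above can
% be sketched as follows. This is a sketch, not the exact script used: fitnet
% with default settings is assumed, and results vary with the random weight
% initialization. It reuses x, t, I, O, Neq, MSE00, and MSE00a from above.
Ntrials = 20; H = 1;  % use H = 2 for the overfit case
R2 = zeros(Ntrials,1); R2a = zeros(Ntrials,1);
for trial = 1:Ntrials
    net = fitnet(H);                    % feedforward net, H hidden nodes
    net.trainParam.showWindow = false;
    [net, tr] = train(net, x, t);       % tr.num_epochs gives Nepochs
    e = t - net(x);
    Nw = (I+1)*H + (H+1)*O;             % total number of weights
    Ndof = Neq - Nw;                    % estimation degrees of freedom
    R2(trial) = 1 - sse(e)/(Neq*MSE00); % training-data R^2
    if Ndof > 0
        R2a(trial) = 1 - (sse(e)/Ndof)/MSE00a; % DOF-adjusted R^2
    else
        R2a(trial) = NaN;               % overfit: Nw > Neq
    end
end
[maxR2, ibest] = max(R2)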
Hope this helps.
Thank you for formally accepting my answer.
Greg