close all, clear all, clc
x0 = [ -1 1 -1 1 -1 1 -1 1; -1 -1 1 1 -1 -1 1 1; -1 -1 -1 -1 1 1 1 1]
x4 =[ 9 6 4 3 8 1 3 4]
x = [ x0; x4]
t = [ 23.72, 19.84, -8.91, 4.41, -7.53, 1.41, 5.16, 0.66 ]
[I N] = size(x)
[O N] = size(t)
Neq = N*O
% Obviously, you have Neq = 8 equations.
meant2 = mean(t,2)
vart12 = var(t,1,2)
vart02 = var(t,0,2)
% Therefore, if you try a constant model you get
y00 = repmat(mean(t,2),1,N)
MSE00 = vart12
MSE00a = vart02
% If you try a linear model solution (see my previous code) you get
W0 = t/[x;ones(1,N)]
Nw0 = numel(W0)
Ndof0 = Neq-Nw0
y0 = W0*[x;ones(1,N)]
e0 = t-y0
SSE0 = sse(e0)
MSE0 = SSE0/Neq
MSE0a =SSE0/Ndof0
NMSE0 = MSE0/MSE00
NMSE0a = MSE0a/MSE00a
R20 = 1-NMSE0
R20a = 1-NMSE0a
% R20 = 0.44 means that the linear model appears to account for 44% of the
% target variance and is therefore better than the constant model. However,
% R20a < 0 means that, once the bias of testing on the training data is taken
% into account, the linear model is probably worse, and little, if any,
% confidence should be placed in the R2 estimate for future data.
% To obtain a better model, higher-order polynomial or neural network models
% can be tried. In general, however, this reduces the number of estimation
% degrees of freedom, which can even become negative (more unknowns than
% equations).
% Apparently you have deduced a reduced-term 3rd-order polynomial model for
% the I/O relation. Was this done via underdetermined linear least squares,
% with the coefficients of the 12 missing terms found to be negligible? If
% so, was regularization used?
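% For reference, here is a minimal sketch of how such an underdetermined fit
% could be set up. The column construction and the use of lsqminnorm (which
% returns the minimum-norm, i.e. implicitly regularized, solution) are my
% assumptions, not a quote of the original design:
P = ones(1,N);                             % 1 bias row
for i = 1:I
    P = [P; x(i,:)];                       % 4 linear terms
end
for i = 1:I
  for j = i:I
    P = [P; x(i,:).*x(j,:)];               % 10 quadratic terms
  end
end
for i = 1:I
  for j = i:I
    for k = j:I
      P = [P; x(i,:).*x(j,:).*x(k,:)];     % 20 cubic terms
    end
  end
end
% 35 rows of P vs. Neq = 8 equations ==> t = Wp*P is underdetermined
Wp = lsqminnorm(P.', t.').';               % minimum-norm least squares
% Terms whose coefficients come out negligible could then be dropped,
% yielding a reduced-term model.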
% Neural Network Solutions:
% H = 0 hidden nodes corresponds to the linear model above.
% H = 1 hidden node may be little, if any, better than H = 0.
% H = 2 overfits the model (Nw > Neq), so R2a < 0. Nevertheless, it is
% interesting to see the results of Ntrials = 20 multiple designs for
% H = 1 and H = 2:
% THERE ARE 7/20 R^2 RESULTS IN [ 0.837 0.867 ]
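% For completeness, the Ntrials multiple-design experiment above can be
% sketched as follows. This is my standard setup, not necessarily what was
% run: the fitnet/dividetrain choices and training defaults are assumptions.
Ntrials = 20;
for H = 1:2
    Nw   = (I+1)*H + (H+1)*O;        % weight count for an I-H-O net
    Ndof = Neq - Nw;                 % negative for H = 2 (overfit)
    R2a  = zeros(1,Ntrials);
    for n = 1:Ntrials
        net = fitnet(H);             % random initial weights each trial
        net.divideFcn = 'dividetrain';   % train on all data (tiny set)
        net = train(net, x, t);
        y   = net(x);
        R2a(n) = 1 - (sse(t-y)/Ndof)/vart02;   % adjusted R^2
    end
end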
Hope this helps.
Thank you for formally accepting my answer
Greg