MATLAB: Neural Network result offset by one

Tags: Deep Learning Toolbox, MATLAB, narxnet, neural networks

I am learning the Neural Networks toolbox and I worked through a simple example using narxnet to predict the output of a sine function:
proffset = 5; % target offset to train for prediction
tdelay = 2; % length of delay line
tstep = 0.1; % step size (assumed; the answer below uses dx = 0.1)
x = [1:tstep:15]; % create the example function
y = 2*sin(x)+1; % y = f(x(t)), x(t)=t
xo = x(1:end-proffset); % training inputs, x(t)
yo = y(proffset+1:end); % training targets, y(t-T)
net = narxnet(1:tdelay,1:tdelay,10); % default values
[Xs,Xi,Ai,Ts,Ew,tshift] = preparets(net,num2cell(xo),{},num2cell(yo));
net = train(net,Xs,Ts,Xi,Ai);
[yn, xn, an] = net(Xs,Xi,Ai);
plot(xo(tdelay+1:end),cell2mat(yn),'o-g');
This works fine. The outputs match the targets very closely, as expected for a simple function.
However, when I use my real data in this same framework, the outputs are clearly shifted by one time step (-1) relative to the targets, even though the number of outputs is correct (i.e., number of outputs = number of targets minus the length of the delay line).
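One way to quantify the apparent shift (a diagnostic sketch, assuming yn, yo, and tdelay from the snippet above) is to compare the fit at a few candidate lags:
yhat = cell2mat(yn); % network outputs
ytgt = yo(tdelay+1:end); % targets aligned with the outputs
err = zeros(1,3);
for lag = -1:1 % test lags -1, 0, +1
    idx = max(1,1+lag) : min(numel(ytgt),numel(ytgt)+lag);
    err(lag+2) = mean((yhat(idx-lag) - ytgt(idx)).^2);
end
err % the smallest entry indicates the effective lag between output and target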
The example seems very straightforward, and I can't figure out why "real" data would produce this kind of behavior. What can cause this kind of offset? Thank you!

Best Answer

1. Since I always use
a. t and y to denote target and output, and
b. subscripts "o" and "c" to denote "o"pen loop and "c"losed loop,
I had to change notation to prevent my confusion.
2. What does the pr in proffset represent?
3. Shouldn't your comment describe the training input as x(t) and the training target as y(t+T) (rather than y(t-T)), so that x(1) predicts y(1+T)? I think this may have caused your confusion.
4. The default 'dividerand' causes interpolation at (sometimes VERY) nonconstant time steps. To get a good understanding, use the diff function to determine the spacing of the validation and test indices for this simple example:
[trnind, valind, tstind] = dividerand(100, 0.7, 0.15, 0.15);
valspacing = diff(valind)
testspacing = diff(tstind)
Surprised?
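For contrast, a quick sketch with divideblock (same ratios assumed) shows contiguous index blocks, so the within-subset spacing is always 1:
[trnb, valb, tstb] = divideblock(100, 0.7, 0.15, 0.15);
valblockspacing = diff(valb) % all ones: validation block is contiguous
testblockspacing = diff(tstb) % all ones: test block is contiguous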
5. For unbiased time series prediction, the test set should be the final block of the series:
testInd = [ Ntrn + Nval + 1 : N ]
This can be accomplished in several ways:
a. divideblock with [ Ntrn, Nval, Ntst ] % includes Nval = 0
b. divideint, divideind, or dividerand applied to 1:Ntrn+Nval (see the sketch after this list)
Obviously, only divideblock provides uniform spacing within the training data.
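A sketch of option (b) using divideind, with hypothetical split sizes, that randomizes only the train/validation assignment while pinning the test block to the end of the series:
N = 136; Ntrn = 95; Nval = 20; % hypothetical split sizes for illustration
net = narxnet; % default NARX network
net.divideFcn = 'divideind'; % index-based data division
ind = randperm(Ntrn + Nval); % randomize only the first Ntrn+Nval points
net.divideParam.trainInd = ind(1:Ntrn);
net.divideParam.valInd = ind(Ntrn+1:end);
net.divideParam.testInd = Ntrn + Nval + 1 : N; % final block = unbiased test set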
close all, clear all, clc, plt=0, tic
dx = 0.1, x0 = [ 1: dx: 15 ]; t0 = 2*sin( x0 ) + 1;
[ I N0 ] = size(x0), [ O N0 ] = size(t0) % [ 1 141 ]
offset = 5, N = N0 - offset % 136
x = x0(1:end - offset ); t = t0(offset + 1 : end );
X = con2seq(x); T = con2seq(t);
d = 2, ID = 1:d, FD = 1:d, H = 10 % NARXNET defaults (input delays, feedback delays, hidden nodes)
neto = narxnet;
% neto.divideFcn = 'divideblock'; % For prediction
[ Xo, Xoi, Aoi, To ] = preparets( neto, X, {}, T );
to=cell2mat(To);varto=var(to,1) % 2.0492 Reference MSE
% Desire MSEo/varto <= 0.005 before closing loop
Ntrials = 12
rng('default')
for i =1:Ntrials
state(i) = rng;
neto = configure(neto, Xo, To);
[ neto, tro, Yo, Eo, Xof, Aof ] = train( neto, Xo, To, Xoi, Aoi );
% Equivalently: [ Yo, Xof, Aof ] = neto( Xo, Xoi, Aoi ); Eo = gsubtract( To, Yo );
NMSEo(i) = mse(Eo)/varto; % normalized MSE
end
[ minNMSEo imin] = min(NMSEo) % [ 1.8972e-06 8 ]
result = NMSEo
% result = 0.022016 0.0024273 0.0026271 3.2378e-5
% 0.010759 0.15988 0.0074602 1.8972e-6
% 0.000851 0.79191 0.0008976 0.0064785
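For completeness, a minimal sketch of the loop-closing step that the MSEo/varto criterion above refers to, assuming neto, X, T, and varto from the code above:
netc = closeloop(neto); % convert the trained open-loop net to closed loop
[ Xc, Xci, Aci, Tc ] = preparets( netc, X, {}, T );
Yc = netc( Xc, Xci, Aci ); % closed-loop (multi-step-ahead) predictions
NMSEc = mse( gsubtract( Tc, Yc ) ) / varto % compare with the open-loop NMSEo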
Hope this helps.
Thank you for formally accepting my answer
Greg