MATLAB: Weird problem with NARX prediction

Tags: Deep Learning Toolbox, narx, neural network, prediction, time delay

Hi all,
Now I'm really confused about NARX prediction. I expected the NARX network to give y(t) = f(y(t-1), x(t-1)); instead it seems to give y(t-1) = f(y(t-1), x(t-1)). I tested with a sine function: when I feed ao(1) as the input, I expected the prediction to be a value close to ao(2), but it actually returned a value close to ao(1), so there is no prediction at all. Am I misunderstanding something here? Could anyone help me get real prediction working? Thank you so much.
See my example below:
t = 1:1000;
a = sin(t*pi/10);
ao = a(end-49:end);   % last 50 samples, held out for prediction
ap = a(1:end-50);     % remaining samples, used for training
a1 = num2cell(ap);    % cell-array format expected by the toolbox
b1 = num2cell(ap);    % external input is the same sine as the target
inputSeries  = b1;
targetSeries = a1;
n=1;
% Create a Nonlinear Autoregressive Network with External Input
inputDelays = n:n;
feedbackDelays = n:n;
hiddenLayerSize = 10;
net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize);
% Choose Input and Feedback Pre/Post-Processing Functions
% Settings for feedback input are automatically applied to feedback output
% For a list of all processing functions type: help nnprocess
% Customize input parameters at: net.inputs{i}.processParam
% Customize output parameters at: net.outputs{i}.processParam
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.inputs{2}.processFcns = {'removeconstantrows','mapminmax'};
% Prepare the Data for Training and Simulation
% The function PREPARETS prepares timeseries data for a particular network,
% shifting time by the minimum amount to fill input states and layer states.
% Using PREPARETS allows you to keep your original time series data unchanged, while
% easily customizing it for networks with differing numbers of delays, with
% open loop or closed loop feedback modes.
[inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);
% Setup Division of Data for Training, Validation, Testing
% The function DIVIDERAND randomly assigns target values to training,
% validation and test sets during training.
% For a list of all data division functions type: help nndivide
net.divideFcn = 'dividerand'; % Divide data randomly
% The property DIVIDEMODE set to 'value' means that every individual value
% is assigned to the training, validation or test set.
% For a list of data division modes type: help nntype_data_division_mode
net.divideMode = 'value'; % Divide up every value
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% Choose a Training Function
% For a list of all training functions type: help nntrain
% Customize training parameters at: net.trainParam
net.trainFcn = 'trainlm'; % Levenberg-Marquardt
% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
% Customize performance parameters at: net.performParam
net.performFcn = 'mse'; % Mean squared error
% Choose Plot Functions
% For a list of all plot functions type: help nnplot
% Customize plot parameters at: net.plotParam
net.plotFcns = {'plotperform','plottrainstate','plotresponse', ...
'ploterrcorr', 'plotinerrcorr'};
% Train the Network
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
% Test the Network
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
% Recalculate Training, Validation and Test Performance
trainTargets = gmultiply(targets,tr.trainMask);
valTargets = gmultiply(targets,tr.valMask);
testTargets = gmultiply(targets,tr.testMask);
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)
tmp=[];
for i=1:length(ao)
aao=[ao(i) nan];
inputSeries1=mat2cell(aao,1,ones(1,length(aao)));
targetSeries1=mat2cell(aao,1,ones(1,length(aao)));
% inputSeries1{end}=nan;
% targetSeries1{end}=nan;
% inputSeries1{end-1}=nan;
% targetSeries1{end-1}=nan;
%
nets=net;
%nets = removedelay(net);
%nets = closeloop(net);
[inputs,inputStates,layerStates,targets] = preparets(nets,inputSeries1,{},targetSeries1);
yp=nets(inputs,inputStates,layerStates);
yp=cell2mat(yp);
tmp=[tmp yp(1)];
end
plot(tmp(1:end-n),'b-*');
hold on
plot(ao(1+n:end),'r-o');
hold off
legend('prediction','obs');
return
% View the Network
% view(net)

Best Answer

In general, 0 <= ID <= idmax and 1 <= FD <= fdmax, where ID is the set of input delays and FD the set of feedback delays.
For prediction, 1 <= ID <= idmax and 1 <= FD <= fdmax.
In other words, for prediction make sure ID >= 1.
I do not understand your definitions of input and target.
DO NOT USE 'dividerand' for time series. It destroys the correlations between inputs, delays, and targets. Use 'divideblock' (my choice) or 'divideind' (with interleaved train/val/test signals).
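For example, switching from random division to contiguous blocks is a one-line change against the code in the question ('divideblock' is a standard Deep Learning Toolbox division function):

```matlab
% Keep timesteps contiguous so delayed inputs and targets stay correlated:
% first 70% of timesteps train, next 15% validate, last 15% test.
net.divideFcn = 'divideblock';
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio   = 15/100;
net.divideParam.testRatio  = 15/100;
```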
Unfortunately, my computer is misbehaving and I cannot run any examples.
However, you can find some of my posted examples by searching
greg closeloop
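A minimal sketch of how the trained open-loop network can be turned into a genuine predictor, assuming net, inputSeries, and targetSeries were built as in the question. removedelay shifts the taps so the output at step t is the prediction of the next target value; closeloop feeds the network's own output back for multi-step forecasting:

```matlab
% One-step-ahead prediction: remove one delay so the output leads by a step
nets = removedelay(net);
[xs,xi,ai] = preparets(nets,inputSeries,{},targetSeries);
yp = nets(xs,xi,ai);        % yp{k} is the prediction of the NEXT target value

% Multi-step (iterated) forecast: close the feedback loop
netc = closeloop(net);
[xc,xic,aic] = preparets(netc,inputSeries,{},targetSeries);
yc = netc(xc,xic,aic);      % feeds its own predictions back as inputs
```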