MATLAB: Prediction of future values using narnet

Tags: Deep Learning Toolbox, narnet, neural network, time series prediction

I have a time series of hourly data for the period 1980 to 2005 (219,000 timesteps), and I need to predict the values for the period 2006-2012 (52,560 timesteps). I have generated the code using the NN Toolbox, but I need to clarify the following issues: How can I get predicted values from the trained network for the next 6 years? I know that a closed loop is used for multi-step prediction, but the elements of the resulting array yc are constant: the first few values differ, and all the rest are the same. Is the last or the first value of the array yc the prediction for timestep y(t+1)? How can I get predictions for the additional 52,559 timesteps?
I created the code with the NN Toolbox, and I used divideblock division since the data is a time series.
The code:
if true
% WSin - feedback time series.
load('WSin.mat')
targetSeries = WSin;
feedbackDelays = 1:4;
hiddenLayerSize = 10;
net = narnet(feedbackDelays,hiddenLayerSize);
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
[inputs,inputStates,layerStates,targets] = preparets(net,{},{},targetSeries);
% Setup Division of Data for Training, Validation, Testing
net.divideFcn = 'divideblock'; % Divide data in blocks
net.divideMode = 'time'; % Divide up every value
net.trainFcn = 'trainrp';
net.performFcn = 'mse'; % Mean squared error
net.trainParam.epochs = 2000;
% Train the Network
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
% Test the Network
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
% Recalculate Training, Validation and Test Performance
trainTargets = gmultiply(targets,tr.trainMask);
valTargets = gmultiply(targets,tr.valMask);
testTargets = gmultiply(targets,tr.testMask);
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)
% View the Network
view(net)
% Closed Loop Network
netc = closeloop(net);
[xc,xic,aic,tc] = preparets(netc,{},{},targetSeries);
yc = netc(xc,xic,aic);
perfc = perform(net,tc,yc)
end
This picture shows the output of the open-loop network, and it looks fine.
This picture shows the output of the closed-loop network with the same input data, so I'm not sure whether the next predicted value is the first or the last element, and whether the output should look like this at all.
Thank you very much in advance for helping me with this. I have tried to find an answer in earlier topics, but since I have already tried everything I read there, I needed to ask you.
With kind regards

Best Answer

% The code:
==> When posting, you should reformat the code to one statement per line.
if true
==> What does that statement do?
% WSin - feedback time series.
load('WSin.mat')
targetSeries = WSin;
feedbackDelays = 1:4;
hiddenLayerSize = 10;
==> For a tough problem like this you have to optimize the inputs: use the significant target autocorrelation delays for the feedback delays, and run a trial-and-error search for the optimal number of hidden nodes Hopt (keep increasing H until the validation-set improvement is negligible).
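A minimal sketch of the delay-selection step, assuming the Signal Processing Toolbox `xcorr` function and a rough 95% significance bound; the lag limit and threshold are illustrative choices, not part of the original answer:

```matlab
% Sketch: pick feedbackDelays from significant target autocorrelations.
t = cell2mat(targetSeries);         % cell row of scalars -> row vector
N = numel(t);
maxLag = 50;                        % illustrative search range
ac = xcorr(t - mean(t), maxLag, 'coeff');
ac = ac(maxLag+2:end);              % keep positive lags 1..maxLag
sigThresh = 1.96/sqrt(N);           % approximate 95% significance bound
feedbackDelays = find(abs(ac) > sigThresh);
```

The hidden-layer size Hopt would then be found by looping over candidate values of H and keeping the smallest one whose validation performance stops improving.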
net = narnet(feedbackDelays,hiddenLayerSize);
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
==> Delete. That is the default.
[inputs,inputStates,layerStates,targets] = preparets(net,{},{},targetSeries);
% Setup Division of Data for Training, Validation, Testing
net.divideFcn = 'divideblock'; % Divide data in blocks
net.divideMode = 'time'; % Divide up every value
net.trainFcn = 'trainrp';
==> I assume this is used because the data set is huge.
net.performFcn = 'mse'; % Mean squared error
==> Delete. That is the default.
net.trainParam.epochs=2000;
% Train the Network
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
==> You forgot the final States on the LHS: [ net tr Xf Af ]
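In newer toolbox releases `train` returns only `[net,tr]`, so an alternative to the four-output form above is to recover the final delay states by simulating the trained network; a sketch:

```matlab
% Train, then recover the final input/layer delay states by simulation.
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
[outputs,Xf,Af] = net(inputs,inputStates,layerStates);
% Xf and Af hold the delay states at the end of the known data and can
% seed the closed-loop network so prediction continues from that point.
```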
% Test the Network
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
% Recalculate Training, Validation and Test Performance
trainTargets = gmultiply(targets,tr.trainMask);
valTargets = gmultiply(targets,tr.valMask);
testTargets = gmultiply(targets,tr.testMask);
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)
==> Look more closely at tr. The above calculations have already been done.
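A sketch of reading the split performances straight from the training record, assuming the standard `tr` fields recorded at the best (early-stopping) epoch:

```matlab
% The training record already stores the per-split performance:
trainPerformance = tr.best_perf
valPerformance   = tr.best_vperf
testPerformance  = tr.best_tperf
```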
% View the Network
view(net)
% Closed Loop Network
netc = closeloop(net);
==> Test the data using netc. If the result is noticeably worse than the open-loop performance, train netc starting from the current open-loop weights. Review some of my closeloop examples in the NEWSGROUP.
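A sketch of that retraining step: `closeloop` preserves the open-loop weights, so calling `train` on netc continues from them rather than starting from scratch.

```matlab
% Retrain the closed-loop net starting from the open-loop weights.
netc = closeloop(net);
[xc,xic,aic,tc] = preparets(netc,{},{},targetSeries);
netc = train(netc,xc,tc,xic,aic);
yc = netc(xc,xic,aic);
perfc = perform(netc,tc,yc)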
[xc,xic,aic,tc] = preparets(netc,{},{},targetSeries);
yc = netc(xc,xic,aic);
perfc = perform(net,tc,yc)
end
This picture shows the output of the open-loop network, and it looks fine.
</matlabcentral/answers/uploaded_files/7468/Net.jpg>
This picture shows the output of the closed-loop network with the same input data, so I'm not sure whether the next predicted value is the first or the last element, and whether the output should look like this at all.
</matlabcentral/answers/uploaded_files/7469/NetC.jpg>
OK. I saw your plots after you posted them.
Bottom line:
1. There is a limit to how far you can predict with netc. Therefore, optimization of FD and H is critical.
2. After closing loop, continue training netc initialized with existing weights from net
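To answer the original question directly, a sketch of predicting N future values with narnet, assuming a trained open-loop `net` and the known `targetSeries`; the `closeloop(net,Xf,Af)` three-output syntax is available in newer toolbox releases:

```matlab
% Multi-step prediction beyond the end of the known data.
[X,Xi,Ai,T] = preparets(net,{},{},targetSeries);
[Y,Xf,Af] = net(X,Xi,Ai);             % final delay states at end of data
[netc,Xic,Aic] = closeloop(net,Xf,Af);
N = 52560;                            % hours in 2006-2012
yFuture = netc(cell(0,N),Xic,Aic);
% yFuture is in time order: yFuture{1} is the prediction for y(t+1).
```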
Thank you for formally accepting my answer
P.S. I missed this post because, for some reason, it doesn't show up when using the search word "neural"
Hope this helps.
Greg