MATLAB: Implementing initial weights and significant feedback delays in a NARNET

Tags: Deep Learning Toolbox, narnet, neural network, neural networks, significant correlations, significant lags, time series, tutorial

Hi. I’m trying to understand how to choose training strategies for NARNETs that give the best possible predictions. What I want to create is a script that I can feed any time series, regardless of how it looks, and that then finds the best training design for it. This is the code I have at the moment:
T = simplenar_dataset;   % example time series
N = length(T);           % length of time series
MaxHidden = 10;          % number of hidden nodes that will be tested
% Attempt to determine significant feedback delays with autocorrelation
autocorrT = nncorr( zscore(cell2mat(T),1), zscore(cell2mat(T),1), N-1 );
[ sigacorr inda ] = find(abs(autocorrT(N+1:end) > 0.21))
for hidden = 1:MaxHidden
    parfor feedbackdelays = 1:length(inda)
        FD = inda(feedbackdelays);
        net = narnet( 1:FD, hidden );
        [ Xs, Xsi, Asi, Ts ] = preparets( net, {}, {}, T );
        ts = cell2mat( Ts );
        net.divideFcn = 'divideblock';   % divide the data into contiguous blocks
        net.trainParam.min_grad = 1e-15;
        net.trainParam.epochs = 10000;
        rng( 'default' )
        [ net tr Ys Es Af Xf ] = train( net, Xs, Ts, Xsi, Asi );
        NMSEs = mse( Es ) / var( ts, 1 )   % normalized mean squared error (NMSE)
        performanceDivideBlockNMSEs(hidden,feedbackdelays) = NMSEs;
    end
end
First off: Is this the correct way of implementing the statistically significant feedback delays?
Second: if the net.divideFcn = 'divideblock' line is left uncommented, as it is in the code above, I get an error inside the loop saying "Attempted to access valInd(0); index must be a positive integer or logical.", and I'm not sure what is causing it.
Third: I've heard people say that you should "try different initial weights". How do I do that? Is it the rng command I need to change?
The idea is then that I find the address of the best-performing net in the performanceDivideBlockNMSEs matrix, so that I can retrain a closed-loop net with those settings and make predictions; for now, though, I'm just focusing on the open-loop net.
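For reference, here is a minimal sketch of how I picture finding that address (assuming the loops above have filled performanceDivideBlockNMSEs; bestHidden and bestFD are just my placeholder names):
[ bestNMSE, k ]              = min( performanceDivideBlockNMSEs(:) );
[ bestHidden, bestDelayIdx ] = ind2sub( size(performanceDivideBlockNMSEs), k );
bestFD = inda( bestDelayIdx );   % use feedback delays 1:bestFD when retraining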
Thanks

Best Answer

1. Unfortunately, the form of NNCORR that you are using is BUGGY!
PROOF:
a. plot(-(N-1):N-1, autocorrT)
b. minmax(autocorrT) = [ -2.3082 1.0134 ]
c. sigacorr = ones(1,41)
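If you want to reproduce this yourself, a minimal sketch (using the same simplenar_dataset series and the same default nncorr call as in your code) is:
T = simplenar_dataset;
t = zscore( cell2mat(T), 1 );
N = length(t);
autocorrT = nncorr( t, t, N-1 );   % same call as in your code (no flag)
plot( -(N-1):N-1, autocorrT )      % a valid autocorrelation lies in [-1, 1] ...
minmax( autocorrT )                % ... yet some magnitudes exceed 1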
2. BETTER SOLUTION: Use the Fourier Method
za = zscore(a,1); zb = zscore(b,1);   % a,b are double row vectors (i.e., not cells)
N  = length(za);
A = fft(za); B = fft(zb);
CSDab = A.*conj(B);                   % Cross Spectral Density
crosscorrFab = ifft(CSDab)/N;         % F => Fourier method; /N matches the 'biased' scaling
crosscorrFba = ifft(conj(CSDab))/N;   % i.e., ifft(B.*conj(A))/N
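Applied to your autocorrelation problem, a minimal sketch (reusing your series T and your 0.21 threshold, which is itself only a stand-in for a proper significance level) would be:
t = zscore( cell2mat(T), 1 );             % target series as a double row vector
N = length(t);
F = fft(t);
autocorrF = ifft( F.*conj(F) )/N;         % Fourier autocorrelation, 'biased'-style scaling
poslags   = autocorrF( 2 : floor(N/2) );  % lags 1 ... floor(N/2)-1 (the trustworthy half)
inda      = find( abs(poslags) > 0.21 )   % candidate significant feedback delays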
3. You might wish to compare this with the NNCORR documentation options
help nncorr
doc nncorr
% The optional FLAG determines how nncorr normalizes correlations.
% 'biased' - scales the raw cross-correlation by 1/N.
% 'unbiased' - scales the raw correlation by 1/(N-abs(k)), where k
% is the index into the result.
% 'coeff' - normalizes the sequence so that the correlations at
% zero lag are identically 1.0.
% 'none' - no scaling (this is the default).
crosscorrBab = nncorr( za, zb, N-1, 'biased' ); % B ==> "b"iased
crosscorrNab = nncorr( za, zb, N-1, 'none' )/N; % N ==> "n"one
crosscorrUab = nncorr( za, zb, N-1, 'unbiased' ); % U ==> "u"nbiased
crosscorrMab = nncorr( za, zb, N-1 ); % M ==> "m"issing flag (default)
% crosscorrCab = nncorr( za, zb, N-1, 'coeff' ); ERROR: BUG
You should find that B & N are equivalent, and similarly that U & M are equivalent.
Therefore, there are really only two NNCORR options to consider: biased and unbiased.
It is instructive to overlay the plot combinations F&B, F&U, and B&U. Most notably, for lags greater than ~N/2 the three are, in general, quite different. Although the differences are much smaller for lags < N/2, I recommend using the Fourier method or one of the correlation functions from other toolboxes (e.g., XCORR from the Signal Processing Toolbox).
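For example, a minimal overlay sketch (assuming the vectors computed above; the Fourier result is circular, so only nonnegative lags are compared):
lags = 0 : N-1;                         % nonnegative lags only
Fpos = crosscorrFab( 1:N );             % Fourier (circular) result, lags 0 ... N-1
Bpos = crosscorrBab( N : 2*N-1 );       % 'biased' result at the same lags
Upos = crosscorrUab( N : 2*N-1 );       % 'unbiased' result at the same lags
figure
subplot(3,1,1), plot( lags, Fpos, 'b', lags, Bpos, 'r' ), title('F & B')
subplot(3,1,2), plot( lags, Fpos, 'b', lags, Upos, 'r' ), title('F & U')
subplot(3,1,3), plot( lags, Bpos, 'b', lags, Upos, 'r' ), title('B & U')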
4. Once thresholding yields the "significant" lags, use as few lags and hidden nodes as possible to avoid "overfitting". Performance on non-training data tends to decrease as the ratio of number of unknown parameters to number of training equations increases.
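For concreteness, a minimal sketch of that ratio for one candidate open-loop design with feedback delays 1:FD and H hidden nodes (my notation, not from your code; Ntrn assumes the default 0.7/0.15/0.15 divideblock split):
net = narnet( 1:FD, H );
[ Xs, Xsi, Asi, Ts ] = preparets( net, {}, {}, T );
Ntrgt  = numel( Ts );                    % targets left after the delays are removed
Ntrn   = Ntrgt - 2*round( 0.15*Ntrgt );  % training targets under divideblock
O      = 1;                              % a NAR net has a single output
Ntrneq = Ntrn*O;                         % number of training equations
Nw     = (FD+1)*H + (H+1)*O;             % unknown weights (hidden + output layers)
ratio  = Nw / Ntrneq                     % keep this well below 1 to limit overfitting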
Hope this helps.
Thank you for formally accepting my answer
Greg