MATLAB: Improving NARX network results

Tags: Deep Learning Toolbox, NARX neural network modeling

I developed a NARX network for modeling a UASB reactor and predicted three different output parameters over 11 timesteps with a one-step-ahead approach. While some of the predictions are well within range, others show unacceptable differences between target and output. I tried different combinations of hidden layer sizes and delay sizes, but the results did not improve. Should I incorporate something else into the code to improve the training of the neural network? Or should I improve the results with a filter (e.g. Kalman), or use a different model (neuro-fuzzy or hybrid) altogether to solve the problem? The configuration of the network is 5-12-12-3, and the training dataset consists of data at 100 timesteps.

Best Answer

One hidden layer is sufficient:
net = narxnet(ID,FD,H)
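
For example, a minimal open-loop sketch (the lag vectors ID and FD, the hidden layer size H, and the cell arrays X and T are placeholders for your own choices and your 5-input/3-output data):

% Minimal open-loop NARX sketch; X and T are 1-by-N cell arrays of
% 5x1 input and 3x1 target vectors.
ID  = 1:2;                       % example input delays
FD  = 1:2;                       % example feedback delays
H   = 10;                        % single hidden layer
net = narxnet(ID, FD, H);
[Xs, Xi, Ai, Ts] = preparets(net, X, {}, T);   % shift the series for the delays
[net, tr] = train(net, Xs, Ts, Xi, Ai);
Y = net(Xs, Xi, Ai);             % one-step-ahead (open-loop) predictions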
For details, search using the keywords "greg narxnet" and "greg narx".
Use the significant lags of the target autocorrelation function and the target/input cross-correlation function to determine FD and ID, respectively.
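
A hedged sketch of one way to do this for a single input/target channel pair (xcorr is from the Signal Processing Toolbox; the bound 1.96/sqrt(N) is an approximate 95% significance level):

% Candidate lags are those whose correlation magnitude exceeds the bound.
% x, t are row vectors (one input channel, one target channel); repeat
% for each channel and each target/input pair.
N      = length(t);
maxlag = floor(N/4);                      % ignore very long, unreliable lags
thresh = 1.96/sqrt(N);                    % approximate 95% significance bound
lags   = -maxlag:maxlag;
autoc  = xcorr(t - mean(t), maxlag, 'coeff');
crossc = xcorr(t - mean(t), x - mean(x), maxlag, 'coeff');
FD = lags(lags > 0 & abs(autoc(:)')  > thresh)   % candidate feedback delays
ID = lags(lags > 0 & abs(crossc(:)') > thresh)   % candidate input delays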
Determine the upper bound on the number of hidden nodes, Hub, that guarantees that the number of training equations, Ntrneq, exceeds the number of unknown weights, Nw.
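
As a hedged sketch: for a single-hidden-layer narxnet with I inputs and O outputs, the weight count is Nw = (1 + I*numel(ID) + O*numel(FD))*H + (H+1)*O, so requiring Nw <= Ntrneq gives

% Upper bound Hub (values taken from the question).
I      = 5;                               % number of inputs
O      = 3;                               % number of outputs
Ntrn   = 100;                             % training timesteps
Ntrneq = Ntrn*O;                          % number of training equations
Hub    = floor((Ntrneq - O) / (I*numel(ID) + O*numel(FD) + O + 1))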
Use 'divideblock' or 'divideind' to preserve the spacing between data points.
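
For example (the 0.70/0.15/0.15 ratios are just an assumed split; adjust to your data):

net.divideFcn = 'divideblock';            % contiguous trn/val/tst blocks in time
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;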
For fixed ID and FD, find the minimum value of H that yields satisfactory performance. If H << Hub is not satisfied, use a validation set or regularization (msereg, trainbr) to prevent overtraining an overfit net.
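
A hedged sketch of that search (the NMSE goal of 0.01 and the 10 random restarts per H are assumptions, not fixed rules; NMSE is the normalized error defined two points below):

% Find the smallest H that reaches the goal over several weight initializations.
NMSEgoal = 0.01;                          % assumed goal (~99% variance explained)
for H = 1:Hub
    for trial = 1:10
        net = narxnet(ID, FD, H);
        net.divideFcn = 'divideblock';
        % net.trainFcn = 'trainbr';       % Bayesian regularization if H ~ Hub
        [Xs, Xi, Ai, Ts] = preparets(net, X, {}, T);
        [net, tr] = train(net, Xs, Ts, Xi, Ai);
        Y    = net(Xs, Xi, Ai);
        NMSE = perform(net, Ts, Y) / mean(var(cell2mat(Ts)', 1));
        if NMSE <= NMSEgoal, break, end
    end
    if NMSE <= NMSEgoal, break, end
end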
Initialize the RNG so that designs can be duplicated.
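
For example, before the candidate-H loop (any fixed seed works; 0 is just an assumed choice):

rng(0)                                    % fixed seed so each design can be reproduced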
Normalize MSE by the average target variance MSE00 = mean(var(t',1)) to obtain a scale-free performance measure NMSE.
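
With t and y as O-by-N numeric matrices (cell2mat of the cell arrays), this is:

MSE00 = mean(var(t', 1));                 % average target variance
NMSE  = mean((t(:) - y(:)).^2) / MSE00;   % NMSE = 1 <=> no better than the mean
R2    = 1 - NMSE;                         % corresponding fraction of variance explained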
Use the training record tr to divide performance into trn/val/tst components.
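
For example, using the index vectors stored in tr (and MSE00 from the previous point):

Ymat = cell2mat(Y);   Tmat = cell2mat(Ts);
Etrn = Tmat(:, tr.trainInd) - Ymat(:, tr.trainInd);
Eval = Tmat(:, tr.valInd)   - Ymat(:, tr.valInd);
Etst = Tmat(:, tr.testInd)  - Ymat(:, tr.testInd);
NMSEtrn = mean(Etrn(:).^2) / MSE00;
NMSEval = mean(Eval(:).^2) / MSE00;
NMSEtst = mean(Etst(:).^2) / MSE00;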
Hope this helps.
Thank you for formally accepting my answer
Greg