Well, I think I know how to do it: in the matrix of new inputs and new targets I can place NaNs where the predicted outputs go and, by looping, obtain several outputs ahead, because as I have seen in some code, to perform 300 one-step iterations I would have to supply 300 inputs (which is impossible, since the future inputs are unknown). Now my dilemma is this: I have done a detailed analysis, closed the loop, and checked with more data whether the prediction more or less held, but unfortunately I have not been successful. For this I used about 2100 data points, but I think they are insufficient to achieve the desired accuracy. Besides, the variables are correlated, but not strongly (about 0.35). Open loop I get an accuracy of R2a = 0.82 with MSE = 0.0016, and analyzing R2 on training/validation/test, none of them shows overfitting. Do you think I should use more data? Thank you very much.
MATLAB: Precision analysis of a NARX network
Deep Learning Toolbox, narx
Related Solutions
The best approach for regression is to start with FITNET using as many defaults as possible. The default I-H-O node topology contains Nw = (I+1)*H+(H+1)*O unknown weights. Ntrn training examples yield Ntrneq = Ntrn*O training equations with Ntrndof = Ntrneq-Nw training degrees of freedom. The average variance of the training target examples is MSEtrn00 = mean(var(target')). Obtaining a mean-square error lower than MSEtrngoal = 0.01*Ntrndof*MSEtrn00/Ntrneq for Ntrndof > 0 results in a normalized, DOF-adjusted MSE of NMSEtrna <= 0.01 and the corresponding adjusted training Rsquared R2trna = 1-NMSEtrna >= 0.99. That is interpreted as successful modeling of at least 99% of the variation in the target.
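Worked out in code, the bookkeeping above looks like this (a minimal sketch; `input` and `target` are assumed to be the I x N input and O x N target matrices, and all N examples are assumed used for training):

```matlab
[I, N] = size(input);                        % input dimension and number of examples
[O, ~] = size(target);                       % output dimension
H = 10;                                      % FITNET default hidden layer size
Nw      = (I+1)*H + (H+1)*O;                 % unknown weights
Ntrn    = N;                                 % training examples
Ntrneq  = Ntrn*O;                            % training equations
Ntrndof = Ntrneq - Nw;                       % training degrees of freedom
MSEtrn00   = mean(var(target'));             % average target variance
MSEtrngoal = 0.01*Ntrndof*MSEtrn00/Ntrneq;   % goal corresponding to R2trna >= 0.99
```

With these quantities in hand, a trained net's error can be converted to the DOF-adjusted NMSEtrna and R2trna described above.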
The training objective is to try to minimize H subject to the constraint R2trna >= 0.99. This is usually achieved by trial and error over a double for loop, with an outer loop over hidden node candidate values h = Hmin:dH:Hmax and an inner loop over i = 1:Ntrials random weight initializations. I have posted many, many examples. Search NEWSGROUP and ANSWERS using
greg fitnet Ntrials
If Ntrneq < ~2*Nw, validation stopping and/or regularization should be used to mitigate the problem of an overtrained, overfit net.
The best approach to avoid overtraining is to use BOTH validation set stopping AND regularization.
HOWEVER FOR SOME STRANGE REASON, using validation stopping with TRAINBR is NOT AVAILABLE IN THE NNTOOLBOX !!!
Your choice of TRAINBR instead of FITNET is not wrong. However, you have made numerous errors, especially by not accepting as many defaults as possible.
Why not just use the syntax in
help trainbr
doc trainbr
with the double loop approach?
Don't forget to initialize the RNG before the first loop so that you can duplicate results.
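The double-loop search with the RNG initialized first can be sketched as follows (a minimal sketch; Hmin, dH, Hmax, Ntrials and the data x, t are assumed placeholders):

```matlab
rng(0)                                   % initialize RNG before the first loop
                                         % so results can be duplicated
Hcand = Hmin:dH:Hmax;                    % hidden node candidates
NMSE  = zeros(Ntrials, numel(Hcand));
R2    = zeros(Ntrials, numel(Hcand));
MSE00 = mean(var(t'));                   % reference variance, as above
for j = 1:numel(Hcand)
    for i = 1:Ntrials
        net = fitnet(Hcand(j));          % or feedforwardnet(h,'trainbr')
        net = configure(net, x, t);      % draws fresh random initial weights
        net = train(net, x, t);
        y = net(x);
        NMSE(i,j) = mean((t(:)-y(:)).^2) / MSE00;
        R2(i,j)   = 1 - NMSE(i,j);
    end
end
```

The smallest h whose best trial satisfies the R2 constraint is then kept.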
Hope this helps.
Thank you for formally accepting my answer
Greg
I think the network, evaluated openloop, gives good results,
What are I, N, ID, FD, H, Ntrn, Nval, Ntst, R2trn, R2trna, R2val and R2tst ?
but for now I do not believe them.
Why? What do you get from the same data using closeloop?
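A closed-loop check on the same data can be sketched like this (a minimal sketch, assuming `net` is the trained open-loop NARX net and `inputSeries`/`targetSeries` are the original cell-array sequences):

```matlab
netc = closeloop(net);                                   % close the feedback loop
[Xc, Xic, Aic, Tc] = preparets(netc, inputSeries, {}, targetSeries);
Yc   = netc(Xc, Xic, Aic);                               % closed-loop outputs
MSEc = mse(netc, Tc, Yc)                                 % compare with open-loop MSE
```

A large gap between the open-loop and closed-loop MSE indicates the net does not iterate well on its own predictions.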
My questions are as follows:
1 - I have evaluated the accuracy openloop; how would I do it closeloop? For a first closed-loop simulation, as inputs I created a matrix containing the last two values of inputSeries plus, say, 30 inputs from outside the training sample, which gives a 2*32 matrix. To create the newTargets matrix I take the last value of targetSeries and fill with 31 NaN, obtaining another matrix of 1*32.
I don't understand where this data is coming from.
From there I simulate the network, and it gives me values that, when plotted together with the real targets, are somewhat displaced. (I would like to send a screenshot of that graph; tell me how to send it.) I think I am performing the operation correctly, although I would like to evaluate the closed-loop accuracy very strictly.
There is a way to show plots on ANSWERS. Find out how. Right now I need to see your code.
2 - My second question: suppose I try the net for real, i.e., suppose today is March 4, 2013. From a platform I obtain data (RSI and EMA inputs), from which I build another matrix, newInputs, containing the last 2 values of inputSeries plus today's RSI and EMA values (I think I am doing well so far). To create another matrix, newTargets, I take the last value of targetSeries plus 2 NaN so that newInputs (2*3) and newTargets (1*3) have the same dimension, and I run the simulation to obtain the predicted value for March 5, 2013. To continue iterating, I introduce the value of March 5 and obtain the value for March 6, much like a NAR network does. That is, in the preparets command:
[inputs, inputStates, layerStates, targets] = preparets(net, inputSeries, targetSeries); and then evaluate the network outside the sample?
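One way to sketch the multistep iteration described above (names such as `futureInputs` are hypothetical placeholders; the number of leading known target values must match the net's feedback delays, here assumed to be 1:2):

```matlab
netc = closeloop(net);                       % convert to closed-loop form
numSteps = 30;                               % how far ahead to predict
% futureInputs: hypothetical 1 x numSteps cell array of future exogenous
% inputs (e.g. RSI/EMA); the future targets are unknown, so use NaN:
newInputs  = [inputSeries(end-1:end),  futureInputs];
newTargets = [targetSeries(end-1:end), repmat({NaN}, 1, numSteps)];
[Xc, Xic, Aic] = preparets(netc, newInputs, {}, newTargets);
Yc = netc(Xc, Xic, Aic);                     % predicted future outputs
```

The known leading values fill the input and feedback delay states; after that the closed loop feeds each predicted output back in place of the unknown targets.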
Basically these are my doubts.
I don't understand. Post the code.
Greg