PLEASE DO NOT START A NEW THREAD WITH FOLLOWUP QUESTIONS TO ANOTHER THREAD.
> Could you please clarify the answer once again. I did not understand it.
OK:
1. There are only 2 layers: 1 hidden and 1 output.
2. The challenge is to choose the smallest number of hidden nodes that will yield satisfactory results.
3. For straightforward training you would like many more output training equations than unknown weights in order to mitigate errors caused by noise, measurement error, etc.
4. If this is not possible, validation stopping and regularized training can be used. The former is the Neural Network Toolbox (NNTBX) default.
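To make point 4 concrete, here is a minimal sketch of what validation stopping does, on a toy 1-D linear fit (plain Python for illustration, not toolbox code; the data, learning rate, and patience value are assumptions of mine, not from the thread):

```python
import random

# Toy data: y = 2*x + 1 + noise, split into training and validation sets.
random.seed(0)
data = [(x / 50.0, 2 * (x / 50.0) + 1 + random.gauss(0, 0.1)) for x in range(50)]
train, val = data[::2], data[1::2]

def mse(w, b, pairs):
    return sum((w * x + b - y) ** 2 for x, y in pairs) / len(pairs)

w = b = 0.0
lr, patience, bad = 0.1, 5, 0
best_val = float("inf")
for epoch in range(2000):
    # One full-batch gradient step on the training set.
    gw = sum(2 * (w * x + b - y) * x for x, y in train) / len(train)
    gb = sum(2 * (w * x + b - y) for x, y in train) / len(train)
    w, b = w - lr * gw, b - lr * gb
    # Validation stopping: quit when the held-out error has not
    # improved for `patience` consecutive epochs.
    v = mse(w, b, val)
    if v < best_val:
        best_val, bad = v, 0
    else:
        bad += 1
        if bad >= patience:
            break
print(w, b)  # should land near the true (2, 1)
```

The held-out set plays no role in the gradient; it only decides when to stop, which is what lets you get away with more weights than the equation count alone would justify.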
> For training, say, 52 vectors (it could also be 100 vectors), how do I decide how many layers of perceptrons I should use for effective training in the following function? net=newff(final,target,9)
CORRECTION OF TERMINOLOGY AND MISUNDERSTANDING:
1. Perceptron is the name of a TYPE of network.
2. The standard multilayer perceptron (MLP) has two layers: 1 hidden and 1 output
3. Given the dimensions of the input and output vectors (I and O), the main differences between standard MLPs are H, the number of hidden nodes, and the values of the corresponding Nw = (I+1)*H + (H+1)*O connection weights.
4. With Ntrn training vector pairs, there are Ntrneq = Ntrn*O training equations.
5. If you do not use validation stopping or regularized training (MSEREG), it is desirable to have Ntrneq >> Nw. Otherwise, H can be larger.
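The counts in items 3-5 can be sketched in a few lines (Python for illustration; I = 4 is an assumed input dimension, since the question gives Ntrn = 52 and H = 9 but not I or O):

```python
def n_weights(I, H, O):
    # Single-hidden-layer MLP with bias nodes:
    # (I+1)*H input-to-hidden weights plus (H+1)*O hidden-to-output weights.
    return (I + 1) * H + (H + 1) * O

def n_equations(Ntrn, O):
    # Each of the Ntrn target vectors yields O scalar training equations.
    return Ntrn * O

I, O, Ntrn, H = 4, 1, 52, 9      # H = 9 as in newff(final,target,9)
Nw = n_weights(I, H, O)          # (4+1)*9 + (9+1)*1 = 55 unknowns
Ntrneq = n_equations(Ntrn, O)    # 52*1 = 52 equations
print(Ntrneq, Nw)                # 52 < 55: underdetermined, so validation
                                 # stopping or MSEREG would be advisable.

# Largest H that still keeps Ntrneq >= Nw (an upper bound, not a
# recommendation), from H*(I+1+O) <= Ntrneq - O:
Hub = (Ntrneq - O) // (I + O + 1)
print(Hub)                       # 8
```

With these assumed dimensions, H = 9 already pushes Nw past Ntrneq, which is exactly the situation where item 5's escape hatch (validation stopping or MSEREG) matters.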
> Please give me a clarification about whether there is any ratio to be maintained between the number of perceptrons (CORRECTION: HIDDEN NODES) and the number of training vectors.
6. See 5.
> Thanks in advance for your help.
Hope this helps.
Thank you for formally accepting my answer.
Greg