Hi,
I'm very new to MATLAB and neural networks. I've done a fair amount of reading (the neural network FAQ, the MATLAB user guide, LeCun, Hagan, various others) and feel like I have some grasp of the concepts – now I'm trying to get the practical side down. I am working through a neural net example/tutorial very similar to the Cancer Detection MATLAB example (<http://www.mathworks.co.uk/help/nnet/examples/cancer-detection.html?prodcode=NN&language=en>). In my case I am trying to achieve binary classification on a 16-feature dataset, and am evaluating the effect of varying the number of nodes in the single hidden layer on training and generalisation. For reference below, x (double) is my feature matrix and t is my target vector (binary); the training sample size is 200 and the test sample size is approx 3700.
My questions are:

1) I'm using patternnet's default 'tansig' in both the hidden and output layers, with 'mapminmax' and 'trainlm'. I'm interpreting the output by thresholding y at 0.5 (y >= 0.5 → class 1). The MATLAB user guide suggests using 'logsig' when the output should be constrained to [0 1]. Should I change the output layer transfer function to 'logsig' or not? I've read some conflicting suggestions with regard to doing this, and that 'softmax' is sometimes suggested but can't be used for training without configuring your own derivative function (which I don't feel confident doing).
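For concreteness, this is roughly my current setup (a minimal sketch only – the hidden-layer size of 10 is just a placeholder, and x/t are as described above):

```matlab
% Sketch of the current setup: default patternnet ('tansig' in both
% layers, 'mapminmax' input processing), trained with 'trainlm',
% output interpreted by thresholding at 0.5.
net = patternnet(10);                   % single hidden layer, 10 nodes (placeholder)
net.trainFcn = 'trainlm';               % Levenberg-Marquardt training
net = train(net, x, t);                 % x: 16-by-N features, t: 1-by-N binary targets
y = net(x);                             % raw network outputs
class = y >= 0.5;                       % thresholded classification

% The user-guide suggestion would then be a one-line change:
% net.layers{2}.transferFcn = 'logsig'; % constrain output to [0 1]
```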
2) The tutorial provides a training and a test dataset, directing the use of the full training set in training (i.e. dividetrain), and at the same time directs stopping training once the network achieves x% success in classifying patterns.

a) Is this an achievable goal without a validation set, or are these conflicting directions?

b) If achievable, how do I set trainParam.goal to evaluate at x% success? Web-crawling has led me to the answer of setting performFcn = 'mse' and trainParam.goal = (1 - x%)*var(t) – does this make sense (it seems to rely on mse = var(err))?

c) Assuming my intuition above is correct – is there an automated way of applying cross-validation to a neural network in MATLAB, or will I effectively have to program a loop myself?

d) Is there any point to this, or would a simple dividerand(200, 0.8, 0.2, 0.0) achieve the same thing?
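To show what I mean for b) and c), here is the kind of loop I imagine – a sketch only, assuming cvpartition (Statistics Toolbox) is available, and assuming the mse = var(err) reasoning above actually holds:

```matlab
% Sketch: set trainParam.goal from a target success rate, then run a
% manual 5-fold cross-validation loop over the 200 training samples.
pct = 0.95;                                      % example target success rate x%
cvp = cvpartition(size(x, 2), 'KFold', 5);       % 5-fold partition of columns
foldErr = zeros(cvp.NumTestSets, 1);
for k = 1:cvp.NumTestSets
    net = patternnet(10);                        % placeholder hidden size
    net.divideFcn = 'dividetrain';               % no internal val/test split
    net.performFcn = 'mse';
    net.trainParam.goal = (1 - pct) * var(t);    % heuristic goal, as discussed
    trIdx = training(cvp, k);                    % logical fold indices
    teIdx = test(cvp, k);
    net = train(net, x(:, trIdx), t(trIdx));
    yhat = net(x(:, teIdx)) >= 0.5;              % thresholded predictions
    foldErr(k) = mean(yhat ~= t(teIdx));         % fold misclassification rate
end
cvError = mean(foldErr)                          % cross-validated error estimate
```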
3) Is there an automated way in the NN toolbox of establishing the optimum number of nodes in the hidden layer?
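If there is no built-in search, I assume a manual grid over candidate hidden sizes is what's needed – something like this sketch, using the default dividerand split and the best validation performance (tr.best_vperf) recorded by train:

```matlab
% Sketch: grid search over hidden-layer sizes, selecting the size with
% the lowest validation mse from the training record.
sizes  = 2:2:20;                      % candidate hidden node counts (example)
valErr = zeros(size(sizes));
for i = 1:numel(sizes)
    net = patternnet(sizes(i));       % default dividerand train/val/test split
    [net, tr] = train(net, x, t);
    valErr(i) = tr.best_vperf;        % best validation performance (mse)
end
[~, best] = min(valErr);
bestSize  = sizes(best)               % hidden size with lowest validation error
```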
Thanks in advance for any and all help