MATLAB: How to get better test error/accuracy with neural networks

Deep Learning Toolbox, MATLAB, neural network, neural networks

Hi,
I am new to neural networks and I'm not sure how to go about achieving better test error on my dataset. I have a ~20,000×64 dataset X with ~20,000×1 targets Y, and I'm training a neural network for binary classification (0 and 1) so that it generalizes well to another 19,000×64 dataset. I currently get an MSE of about 0.175 on the test performance, but I want to do better. My dataset contains values in the range of about -22 to 10,000.
I used the neural networks toolbox and used its GUI to generate a script. I've modified some of the parameters like so:
inputs = X';
targets = Y';
hiddenLayerSize = 5;
net = patternnet(hiddenLayerSize);
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};
net.divideFcn = 'divideblock'; % Divide data into contiguous blocks
net.divideMode = 'sample'; % Divide up every sample
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
net.trainFcn = 'trainrp'; % Scaled conjugate gradient
net.performFcn='mse';
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
'plotregression', 'plotfit'};
% Train the Network
[net,tr] = train(net,inputs,targets);
% Test the Network
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
% Recalculate Training, Validation and Test Performance
trainTargets = targets .* tr.trainMask{1};
valTargets = targets .* tr.valMask{1};
testTargets = targets .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)
% View the Network
view(net)
I've read online and in the MATLAB documentation about ways to improve performance. It suggested things like setting a higher error goal, reinitializing the weights with the init() function, etc., but none of that is helping me achieve better performance. Maybe I'm just not understanding how to do it correctly?
Anyway, can someone please point me toward some way to achieve better accuracy? Also, could you please include some code in your answer? I can't seem to understand much without looking at code.

Best Answer

1. Is there evidence that Hopt (the optimal number of hidden nodes) could be more than 10? If so, search over a range like i = 1:2:19.
2. Since your data set is huge, why not use tic and toc to time your runs?
3. Why are you complicating the code by specifying net properties and values that are already defaults?
4. Your comments say TRAINSCG (the default for patternnet, and recommended for binary outputs), but your code uses TRAINRP. Are you having size problems with TRAINSCG?
5. Your Plot options are those for regression, not classification.
% See the plot functions associated with patternnet:
net = patternnet % NO SEMICOLON
6. Your standardizations are incorrect. Use ZSCORE or MAPSTD, and check to make sure EACH variable is standardized.
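A minimal sketch of what that standardization could look like with the question's variable names (zscore needs the Statistics Toolbox; mapstd is in the Deep Learning Toolbox):

```matlab
% Option 1: let the net standardize each input variable (row) to
% zero mean and unit variance, instead of mapminmax
net.inputs{1}.processFcns = {'removeconstantrows','mapstd'};

% Option 2: standardize explicitly before training. zscore works
% down columns, so apply it to X before transposing.
Xz = zscore(X);     % each of the 64 columns -> mean 0, std 1
inputs = Xz';
% Sanity check that EACH variable is standardized:
mean(inputs,2)      % should be ~0 for every row
std(inputs,0,2)     % should be ~1 for every row
```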
7. Unfortunately, I'm not familiar with PLS (although it is the correct function to use for classifier input variable reduction), so some of the following advice may be questionable.
8. Are you trying to reduce 64 dimensions to 8?
9. XL and YL should be transposed
10. You could save the weights using getwb instead of, or in addition to, saving the nets.
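A small sketch of saving and restoring weights this way (net2 is an illustrative name):

```matlab
wb = getwb(net);            % all weights and biases as one column vector
% ... later, restore them into a network of the same architecture:
net2 = patternnet(hiddenLayerSize);
net2 = configure(net2, inputs, targets);  % fix layer sizes before setwb
net2 = setwb(net2, wb);
```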
11. Save and plot the overall and trn/val/tst performances vs. numhidden.
12. Modify the script to calculate, save, and plot the overall and trn/val/tst percent classification errors.
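One way to get those percent classification errors from the question's script (a sketch assuming 0/1 targets and a 0.5 decision threshold; tr is the training record returned by train):

```matlab
yhat = net(inputs) > 0.5;            % hard 0/1 class assignments
err  = (yhat ~= targets);            % logical misclassification vector
PctErr    = 100 * mean(err);               % overall percent error
PctErrTrn = 100 * mean(err(tr.trainInd));  % training subset
PctErrVal = 100 * mean(err(tr.valInd));    % validation subset
PctErrTst = 100 * mean(err(tr.testInd));   % test subset
```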
13. To mitigate the probability of poor initial weights, consider a double-loop design where the inner loop runs Ntrials different weight initializations for each value of numhidden. I use this technique almost all of the time. Search in NEWSGROUP and ANSWERS for examples using
greg patternnet Ntrials
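The double loop in point 13 might look like the following sketch (ranges, Ntrials, and the 0.5 threshold are illustrative; each new patternnet gets fresh random weights, and selection is done on the validation subset so the test subset stays untouched):

```matlab
Ntrials = 10;
numH    = 1:2:19;                 % candidate hidden layer sizes
bestErr = Inf;
for j = 1:numel(numH)             % outer loop: numhidden
    for i = 1:Ntrials             % inner loop: weight initializations
        net = patternnet(numH(j));        % fresh random weights
        [net,tr] = train(net, inputs, targets);
        yhat   = net(inputs) > 0.5;
        valErr = mean(yhat(tr.valInd) ~= targets(tr.valInd));
        if valErr < bestErr       % keep the best net and its weights
            bestErr = valErr;
            bestNet = net;
            bestWB  = getwb(net);
        end
    end
end
```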
Hope this helps.
Thank you for formally accepting my answer
Greg