MATLAB: Very good in training, very bad in predictions (neural network)

ann, genetic algorithm, MATLAB, neural network

Dear all,
I built a network with a 5x100 input and a 1x100 target. When I train the network the results are very good, but when I test it I get bad results. All of my data is randomized and normalized between -1 and 1.
I have tried many networks and different numbers of neurons, but nothing changes.
Please help.
My code is:
net = newff(p, t, 10, {'logsig', 'purelin'});  % 10 hidden nodes, logsig/purelin layers
net = init(net);                               % reinitialize the weights
net.divideParam.trainRatio = 75/100;
net.divideParam.testRatio  = 15/100;
net.divideParam.valRatio   = 10/100;
net.trainParam.epochs = 20;
net.trainParam.goal = 1e-6;
net.trainParam.max_fail = 6;                   % validation-stop patience
net.trainParam.lr = 0.06;                      % learning rate
net.performParam.regularization = 0.008;
[net, tr] = train(net, p, t);
a = sim(net, test);                            % 'test' holds the test inputs
postreg(a, tt);                                % 'tt' holds the test targets

Best Answer

newff is obsolete. Use fitnet instead.
I think you have wasted too much time on hand-tuning parameters. Typically, you should use the defaults for everything except:
1. Set the initial state of the random number generator with your favorite seed. For example
rng('default')
2. Search for the smallest acceptable number of hidden nodes using the outer loop search
h = Hmin:dH:Hmax
3. Search for a suitable combination of random initial weights and train/val/test data divisions using the inner loop search (see the double-loop sketch after this list)
i = 1:Ntrials
4. Typically, I start with ~10 values of h and 10 weight/data-division trials for each value of h.
5. The documentation example for fitnet (the one for newff is similar)
help fitnet
doc fitnet
[x,t] = simplefit_dataset;    % built-in example dataset
net = fitnet(10);             % 10 hidden nodes
net = train(net,x,t);         % default init, data division, and training
view(net)
y = net(x);
perf = perform(net,t,y)       % default performance measure (MSE)
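% Since your symptom is good training but poor test performance, it can
% help to check the held-out test subset directly. This is a sketch, not
% part of the documentation example: the data-division indices are stored
% in the training record that train returns as a second output.
[net, tr] = train(net, x, t);
y = net(x);
testPerf = perform(net, t(tr.testInd), y(tr.testInd))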
6. However, in general, my approach yields fewer failures. I have posted scores of examples in the NEWSGROUP and ANSWERS. Try searching on
greg fitnet Ntrials
(or the same with newff).
7. Two very important things to understand are that
a. Increasing the number of hidden nodes makes it easier to obtain a solution. However, the smaller the number of hidden nodes, the better the net resists noise, interference, measurement errors, and transcription errors. Just as (or more) importantly, the net performs better on nontraining (validation, test, and unseen) data.
b. Because initial weights and data division are random, the design may fail even when all of the other parameters are perfect. Hence the double-loop search, sketched below.
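For concreteness, here is a minimal sketch of the double-loop search from steps 2 and 3, using the notation above (Hmin, dH, Hmax, Ntrials). The dataset, the candidate range, and the tabulation of results are illustrative assumptions, not a prescription:
[x, t] = simplefit_dataset;            % placeholder data; use your own p, t
Hmin = 2; dH = 2; Hmax = 20;           % ~10 candidate hidden-layer sizes
Ntrials = 10;                          % random designs per candidate size
rng('default')                         % reproducible random sequence
vart = var(t, 1);                      % reference variance of the target
numH = numel(Hmin:dH:Hmax);
R2 = zeros(Ntrials, numH);             % fit quality for each trial/size
j = 0;
for h = Hmin:dH:Hmax                   % outer loop: number of hidden nodes
    j = j + 1;
    for i = 1:Ntrials                  % inner loop: random weights/division
        net = fitnet(h);
        [net, tr] = train(net, x, t);  % new random init and data division
        y = net(x);
        R2(i, j) = 1 - mean((t - y).^2) / vart;  % normalized fit quality
    end
end
% Pick the smallest h whose best (or median) R2 over the trials is acceptable.
In practice you would also record the test-subset performance (via tr.testInd, as in the snippet under step 5) so that the choice of h is not based on training data alone.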
Hope this helps.
Thank you for formally accepting my answer
Greg