MATLAB: Bad classification even after training neural network

Tags: Deep Learning Toolbox, neural networks

After training the neural network I get 98.5 percent correct classification in the confusion matrix, but when I test it with sample data it classifies wrongly. Any reasons for this?
Here is the code I am using for training:
rng('default');
load ina.mat
load inb.mat
inputs=mapminmax(ina);
targets=inb;
size(inputs);
p=inputs;
% Create a Pattern Recognition Network
hiddenLayerSize = 40;
net = patternnet(hiddenLayerSize);
% Choose Input and Output Pre/Post-Processing Functions
% For a list of all processing functions type: help nnprocess
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};
% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivide
net.divideFcn = 'dividerand';  % Divide data randomly
net.divideMode = 'sample';     % Divide up every sample
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio   = 15/100;
net.divideParam.testRatio  = 15/100;
% Choose a Training Function
% For a list of all training functions type: help nntrain
net.trainFcn = 'trainscg';  % Scaled conjugate gradient
% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
net.performFcn = 'mse';  % Mean squared error
% Choose Plot Functions
% For a list of all plot functions type: help nnplot
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
                'plotregression','plotfit'};
net.trainParam.max_fail = 55;
net.trainParam.min_grad=1e-10;
net.trainParam.show=10;
net.trainParam.lr=0.01;
net.trainParam.epochs=1000;
net.trainParam.goal=0.001;
% Train the Network
[net,tr] = train(net,inputs,targets);
% Test the Network
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
% Recalculate Training, Validation and Test Performance
trainTargets = targets .* tr.trainMask{1};
valTargets   = targets .* tr.valMask{1};
testTargets  = targets .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs)
valPerformance   = perform(net,valTargets,outputs)
testPerformance  = perform(net,testTargets,outputs)
disp('after training')
y1 = sim(net,p);
y1=abs(y1);
y1=round(y1)
disp(y1)
save E:\final_new\final\net;
% View the Network
view(net)
% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
figure, plotconfusion(targets,outputs);
%figure, ploterrhist(errors)
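One detail worth checking when simulating new samples (not raised in the thread, so an assumption on my part): the code above calls `mapminmax(ina)` without keeping the normalization settings, so separately normalized sample data would land on a different scale than the net was trained on. A minimal sketch of the consistent approach, where `xnew` is a hypothetical matrix of new samples with the same rows (features) as `ina`:

```matlab
% Normalize the training inputs and KEEP the settings structure,
% so later data can be mapped into the same range.
[inputs, ps] = mapminmax(ina);      % ps stores per-row min/max

% ... create and train the network as in the posted code ...

% Apply the SAME settings to new sample data; calling mapminmax(xnew)
% directly would rescale by xnew's own min/max instead.
xNewScaled = mapminmax('apply', xnew, ps);
yNew = net(xNewScaled);
[~, predictedClass] = max(yNew);    % winning class index per column
```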

Best Answer

Two possibilities:
1. The training data does not adequately characterize the total data set.
2. The net is overfit (too many weights) and/or overtrained past the point where further decreases in training error come at the cost of performance on nontraining data.
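A common rule of thumb for gauging the overfitting risk mentioned above is to compare the number of weights against the number of training equations (a sketch; the 70/15/15 split is taken from the posted code, and H = 40 is the posted hidden layer size):

```matlab
[I, N] = size(inputs);    % input dimension and number of samples
[O, ~] = size(targets);   % output (class) dimension

H      = 40;                       % hidden layer size used above
Ntrn   = N - 2*round(0.15*N);      % ~70% of samples go to training
Ntrneq = Ntrn * O;                 % number of training equations
Nw     = (I+1)*H + (H+1)*O;        % number of weights to estimate

% If Nw approaches or exceeds Ntrneq, the net has enough freedom to
% memorize the training set; reduce H or obtain more data.
```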
Are you using validation stopping?
Are your training, validation and test sets randomly chosen?
What are the data division ratios?
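The three questions above can be answered directly from the training record `tr` returned by `train` (a sketch; these are the standard training-record fields):

```matlab
[net, tr] = train(net, inputs, targets);

tr.divideFcn          % 'dividerand' => train/val/test chosen at random
tr.divideParam        % the train/val/test ratios actually used
tr.stop               % why training stopped, e.g. 'Validation stop.'
tr.best_epoch         % epoch with the lowest validation error
tr.best_vperf         % best validation performance
numel(tr.trainInd)    % sizes of the three splits
numel(tr.valInd)
numel(tr.testInd)
```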
Hope this helps.
Thank you for formally accepting my answer
Greg