Is it possible to train a network (net) with stochastic gradient descent in MATLAB? If so, how?
I observe that when I retrain, it completely ignores the information from the previously trained data and overwrites it. Incremental training would be helpful for large-scale problems: training on the complete data set at once takes a very long time.
For example, train iteratively on 100 parts of the data.
TF1 = 'tansig'; TF2 = 'tansig'; TF3 = 'tansig'; % layer transfer functions; TF3 is for the output layer
net = newff(trainSamples.P, trainSamples.T, [NodeNum1, NodeNum2, NodeOutput], {TF1 TF2 TF3}, 'traingdx'); % network created
net.trainFcn = 'traingdm';
net.trainParam.epochs = 1000;
net.trainParam.min_grad = 0;
net.trainParam.max_fail = 2000; % large value, effectively infinity
while(1) % iteratively takes 10 data points at a time
    % p => gets updated with the next 10 data points
    % t => gets updated with the next 10 data points
    [net, tr] = train(net, p, t);
end
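One possible approach is the toolbox's adapt function, which performs incremental weight updates on each presented chunk of data without reinitializing the network. Below is a minimal sketch, assuming a two-hidden-layer network as created above; the variable names P, T, and chunkSize are illustrative, and the adaption settings may need adjusting for your toolbox version:

```matlab
% Sketch: incremental (mini-batch style) updates with adapt.
% Assumes P (inputs) and T (targets) hold the full data set, one sample per column.
chunkSize = 10;
net.adaptFcn = 'adaptwb';                  % adapt weights and biases
% Assumption: use gradient-descent-with-momentum learning on every weight/bias
net.inputWeights{1,1}.learnFcn = 'learngdm';
for i = 1:net.numLayers
    net.biases{i}.learnFcn = 'learngdm';
end

for k = 1:chunkSize:size(P, 2)
    idx = k : min(k + chunkSize - 1, size(P, 2));
    % adapt updates the current weights using only this chunk,
    % so earlier learning is carried forward rather than discarded
    [net, y, e] = adapt(net, P(:, idx), T(:, idx));
end
```

In contrast to calling train repeatedly with aggressive stopping criteria, adapt is designed for exactly this pass-one-chunk-at-a-time style of learning, though it may converge more slowly than batch training on the full data set.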