This can happen if your dataset is huge. In that case it is preferable to train the network in mini-batches.
Classical (shallow) neural networks, such as feedforward nets, do not support mini-batch training out of the box. This can be worked around in the following ways:
1) Manually implement mini-batch training. For this, split your dataset into mini-batches. For example, you can split your "Xgpu" and "Tgpu" into mini-batches like "mini_Xgpu{i}" and "mini_Tgpu{i}". Then set the number of training epochs to 1 and use two nested loops: one over the desired number of epochs and one over the mini-batch iterations. Here's a rough sketch of the code for your reference (a sketch of the splitting step follows the loop below).
net = feedforwardnet(10);
net.trainFcn = 'trainscg';         % scaled conjugate gradient
net.trainParam.epochs = 1;         % one epoch per call to train
for e = 1 : nEpochs
    for i = 1 : nIterations        % one call to train per mini-batch
        net = train(net, mini_Xgpu{i}, mini_Tgpu{i}, 'useGPU', 'only');
    end
end
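For completeness, here is one way the splitting step could look. This is a minimal sketch assuming "Xgpu" and "Tgpu" store one sample per column; "miniBatchSize" and "nSamples" are illustrative names, not part of any toolbox API.
% Split the full dataset into mini-batches stored in cell arrays.
miniBatchSize = 128;                    % example value, tune to fit GPU memory
nSamples = size(Xgpu, 2);               % samples are columns
starts = 1 : miniBatchSize : nSamples;  % first column of each mini-batch
nIterations = numel(starts);
mini_Xgpu = cell(1, nIterations);
mini_Tgpu = cell(1, nIterations);
for i = 1 : nIterations
    cols = starts(i) : min(starts(i) + miniBatchSize - 1, nSamples);
    mini_Xgpu{i} = Xgpu(:, cols);
    mini_Tgpu{i} = Tgpu(:, cols);
end
You may also want to shuffle the columns with randperm before splitting, so the mini-batches are not always built from the same consecutive samples.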
2) Use the existing deep learning functionality. For that, you would transform your feedforward net into a simple deep learning network that has 1 input layer, 1 fully connected layer, 1 custom activation layer and 1 output classification layer. Define the custom layer to apply the tansig activation function of feedforward nets (tansig is mathematically equivalent to tanh). This would reproduce a standard feedforward net.
With this approach, training automatically uses a mini-batch solver such as stochastic gradient descent, so the data are processed in mini-batches by design. A sketch of such a network is shown below.
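As an illustration, a network along these lines could be assembled as follows. This is a minimal sketch, assuming a classification task with "numFeatures" inputs and "numClasses" classes (both illustrative names). Since tansig is equivalent to tanh, the built-in tanhLayer can stand in for the custom layer, and a second fully connected layer is needed to map the 10 hidden units onto the classes.
% Rough equivalent of feedforwardnet(10) for classification,
% trained in mini-batches. numFeatures/numClasses are placeholders.
layers = [
    featureInputLayer(numFeatures)     % input layer
    fullyConnectedLayer(10)            % 10 hidden units
    tanhLayer                          % tansig activation (tansig == tanh)
    fullyConnectedLayer(numClasses)    % map hidden units to classes
    softmaxLayer
    classificationLayer];              % output classification layer

options = trainingOptions('sgdm', ...  % mini-batch SGD with momentum
    'MiniBatchSize', 128, ...
    'MaxEpochs', nEpochs, ...
    'ExecutionEnvironment', 'gpu');

% X: numSamples-by-numFeatures matrix, Y: categorical labels
net = trainNetwork(X, Y, layers, options);
Note that featureInputLayer requires R2020b or later; on older releases an imageInputLayer with reshaped inputs can serve the same purpose.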
Hope this helps!