MATLAB: Why is the number of epochs so low when using trainlm?

[neural network] [nntool] [trainlm] [xor] Deep Learning Toolbox

Hi, I'm a new user of this community, so please forgive my English. I'm using nntool to develop a neural network that can predict solar irradiance, and I'm currently learning how to use the tool. I use a feedforward backpropagation network with 2 input neurons, 1 output, and 1 hidden layer consisting of a single neuron, with the trainlm algorithm and the mse error function. The transfer function is tansig for both the hidden and output layers. The training parameters are: epochs 100, goal 0, max_fail 5, mem_reduc 1, min_grad 1e-010, mu 0.001, mu_dec 0.1, mu_inc 10, mu_max 1e10, show 25, time inf.

I initialized the weights and trained the network, but after 11 epochs the training stopped with an error value of about 1e-012. So the results are fine, but I don't understand why training stops: the slope of the error function is still good at that point. The same thing happens with an XOR network.

That leads to a second question: after training, the output is [-1 1 1 -1], but it should be [0 1 1 0]. Yet the simulation works well. In this case I used a feedforward backpropagation network with trainlm and the same training parameters; the network was 2-2-1 with logsig as the transfer function for both the hidden and output layers. I hope you can explain this. Thanks.

Best Answer

% Hi, I'm a new user of this community, so please forgive my English. I'm using % nntool to develop a neural network that can predict solar irradiance.
1. I assume this is not the MATLAB timeseries solar_dataset. Correct?
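If it were, it loads straight from the toolbox, e.g. (a sketch; the exact output form may vary with your toolbox version):

[ X, T ] = solar_dataset; % cell arrays of time steps
whos X T % check the sizes before designing the net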
% I'm currently learning how to use the tool. I use a feedforward backpropagation % network with 2 input neurons, 1 output, and 1 hidden layer consisting of a single neuron.
2. fitnet or feedforwardnet?
3. The input node layer contains FAN-IN-UNITS, not neurons.
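For example, a sketch of the two possibilities (fitnet is just feedforwardnet with curve-fitting defaults):

net = fitnet(1); % 1 hidden neuron, as in your design
% or
net = feedforwardnet(1);
% The 2 "inputs" are fan-in units: their number is set automatically
% from the rows of the input matrix when train is called.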
% I use the trainlm algorithm and the mse error function. The transfer function is tansig % for both the hidden and output layers. The training parameters are: epochs 100, % goal 0, max_fail 5, mem_reduc 1, min_grad 1e-010, mu 0.001, mu_dec 0.1, % mu_inc 10, mu_max 1e10, show 25, time inf.
4. To avoid confusion, just list the parameter settings that are not defaults.
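For example, assuming feedforwardnet, something like this (which of your settings are non-default depends on your toolbox version):

net = feedforwardnet(1,'trainlm');
net.trainParam.epochs = 100; % keep only the lines that differ
net.trainParam.min_grad = 1e-10; % from your version's defaults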
% I initialized the weights and trained the network, but after 11 epochs the training % stopped with an error value of about 1e-012. So the results are fine, but I don't % understand why training stops: the slope of the error function is still good at % that point. The same thing happens with an XOR network.
5. [ net, tr, output, err ] = train(net,input,target); % avoid "error" as a variable name: it shadows MATLAB's error function
stoppingcriterion = tr.stop % No semicolon, so the stop reason prints. Also try tr = tr
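For the XOR case the whole check is just a few lines (a sketch; with an MSE near 1e-12 and min_grad = 1e-10, tr.stop will most likely report something like 'Minimum gradient reached.'):

x = [ 0 1 0 1 ; 0 0 1 1 ]; % XOR inputs
t = [ 0 1 1 0 ]; % XOR targets
net = feedforwardnet(2,'trainlm'); % a 2-2-1 design
[ net, tr ] = train(net,x,t);
stoppingcriterion = tr.stop % e.g. 'Minimum gradient reached.'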
% That leads to a second question: after training, the output is [-1 1 1 -1], but it % should be [0 1 1 0]. Yet the simulation works well. In this case I used a ffbp % with trainlm and the same training parameters. The network was 2-2-1 with logsig as % the transfer function for both the hidden and output layers. I hope you can explain this. Thanks.
6. You have missed something. There is no way to get negative output from logsig. Therefore you must be using the default normalization mapminmax.
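You can see the effect directly; mapminmax rescales each row to [-1,1], which is exactly what turns [0 1 1 0] into [-1 1 1 -1]:

t = [ 0 1 1 0 ];
[ tn, ps ] = mapminmax(t) % tn = [ -1 1 1 -1 ]
mapminmax('reverse',tn,ps) % recovers [ 0 1 1 0 ]

The network stores these settings and reverses the mapping automatically at the output, which is why your simulation nevertheless works well.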
7. Although you have given a pretty good explanation, it would help immensely if you posted your code.
Hope this helps.
Thank you for formally accepting my answer
Greg