MATLAB: How to avoid negative values in the output of a feedforward net

Deep Learning Toolbox, feedforward, neural network, outputs

Hello
I'm creating a feedforward neural network to predict hourly values of solar radiation. However, for some hours at night, where it should output 0, it produces a negative number instead.
The output is a vector of 15 elements (one value per hour of the day) that vary from 0 to 1400.
Here is my code:
inputs = tonndata(xlsread('datosJP','inirr3'),false,false);
targets = tonndata(xlsread('datosJP','targirr3'),false,false);
net = feedforwardnet([12,8],'trainlm');
net.trainParam.lr = 0.05;   % NOTE: lr and mc are gradient-descent parameters; trainlm ignores them
net.trainParam.mc = 0.1;
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{3}.processFcns = {'removeconstantrows','mapminmax'};  % with two hidden layers the output is layer 3, not 2
net.divideFcn = 'dividerand';
net.divideMode = 'time';
net.divideParam.trainRatio = 90/100;
net.divideParam.valRatio = 10/100;
net.divideParam.testRatio = 0/100;
net.performFcn = 'mse';
net = configure(net,inputs,targets);
a = 20*rand(12,net.inputs{1}.size)-10;  % was size(x,2), but x is undefined; IW{1} is 12-by-(number of inputs)
net.IW{1} = a;
net = train(net,inputs,targets);
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs);

Best Answer

1. If your targets are bounded for a physical or mathematical reason, a bounded transfer function such as logsig (range [0,1]) or tansig (range [-1,1]) can be used in the output layer. For documentation, use the help and doc commands. For example,
help logsig
doc logsig
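For the two-hidden-layer net in the question, a minimal sketch of this idea (variable names taken from the question; the mapminmax settings are my assumption) is to put logsig on the output layer and scale the targets into its [0,1] range, so the de-normalized output can never drop below the target minimum of 0:

```matlab
% Sketch: bound the output to [0, 1400] with a logsig output layer.
% Assumes "inputs" and "targets" are the cell arrays from the question.
net = feedforwardnet([12 8],'trainlm');
net.layers{3}.transferFcn = 'logsig';            % output layer is layer 3
net.outputs{3}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{3}.processParams{2}.ymin = 0;        % match logsig's [0,1] range
net.outputs{3}.processParams{2}.ymax = 1;        % (default is [-1,1])
net = configure(net,inputs,targets);
net = train(net,inputs,targets);
y = net(inputs);   % bounded below by the target minimum, i.e. no negatives
```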
2. What sizes are your input and target matrices?
[ I N ] = size(inputs)
[ O N ] = size(outputs)
3. If predictions are correlated with past inputs and past predictions, the best model could be a time-series network.
help narxnet
doc narxnet
4. If you stay with the static model, use the regression function FITNET, which calls FEEDFORWARDNET but adds the helpful regression plot. If you want to know the default values, just type (WITHOUT a SEMICOLON)
net = fitnet
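A minimal sketch of that default-first workflow (the hidden-layer size of 10 is fitnet's default, and the variable names are from the question):

```matlab
% Sketch: start from fitnet defaults and inspect the regression plot.
net = fitnet(10)                       % no semicolon: displays all defaults
[net,tr] = train(net,inputs,targets);  % train auto-configures to the data
outputs = net(inputs);
plotregression(cell2mat(targets), cell2mat(outputs))   % fitnet's regression plot
```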
5. Explicitly assigning default values to parameters is confusing. For examples, look at the code in the help and doc examples.
The only parameters I tend to change are net.divideFcn, net.trainParam.goal, and net.trainParam.min_grad. Increasing the latter two can reduce training time without introducing significant error.
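A sketch of changing just those three parameters (the specific values and the variance-based goal are my illustrative assumptions):

```matlab
% Sketch: the three parameters worth changing, everything else left at default.
t = cell2mat(targets);                    % O-by-N target matrix from the question's data
net = fitnet(10);
net.divideFcn = 'divideblock';            % contiguous blocks suit an hourly series
net.trainParam.goal = 0.01*mean(var(t',1));   % MSE goal: 1% of the mean target variance
net.trainParam.min_grad = 1e-6;           % stop earlier than the 1e-7 default
[net,tr] = train(net,inputs,targets);
```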
6. Any good real world design is going to require looking at tens or hundreds of candidates. My policy is to first use defaults, then change parameters to improve performance.
Except for the three parameters mentioned above, it usually comes down to finding the minimum number of hidden nodes that is sufficient and designing multiple candidates that differ only in their initial random weights.
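The search over hidden-node counts and random initializations can be sketched as a double loop that keeps the candidate with the best validation performance (the grid sizes and trial count here are illustrative assumptions):

```matlab
% Sketch: design multiple candidates, keep the best by validation MSE.
Hvec = 2:2:12;  Ntrials = 10;  bestPerf = Inf;
rng(0)                                    % reproducible weight draws
for h = Hvec
    for trial = 1:Ntrials
        net = fitnet(h);                  % fresh net: new random initial weights
        [net,tr] = train(net,inputs,targets);
        if tr.best_vperf < bestPerf       % validation MSE of this candidate
            bestPerf = tr.best_vperf;  bestNet = net;  bestH = h;
        end
    end
end
fprintf('best: H = %d, validation MSE = %g\n', bestH, bestPerf)
```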
Hope this helps
Thank you for formally accepting my answer
Greg