Solved – Basic Question: how does a feed-forward neural network solve regression?

neural networks

This is a fairly basic question but I can't seem to find an answer on the net (perhaps I'm searching the wrong things).

Regression means predicting continuous outputs. Since a neural network typically applies a squashing activation function (such as a sigmoid, giving a value between 0 and 1) at its output, and the output can therefore only lie between 0 and 1, how can a neural network learn a regression function?

Thanks in advance

Best Answer

The usual solution is to use a linear neuron in the output layer rather than the traditional sigmoid or tanh. This removes the bounded range of the squashing function, so the output can take any real value. A linear neuron simply outputs a linear combination of its inputs, usually with an added bias parameter for a constant term. The weights of the linear function and the bias are learned by the network during training, just like the rest of the parameters.
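To make this concrete, here is a minimal sketch in NumPy (a toy example, not any particular library's API): a one-hidden-layer network with a tanh hidden layer and a linear output, trained by gradient descent on a regression target whose values lie well outside [0, 1]. All names and hyperparameters here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: targets are roughly in [-1, 11], far outside [0, 1]
X = rng.uniform(-3, 3, size=(200, 1))
y = 2.0 * X + 5.0 + rng.normal(0, 0.1, size=X.shape)

# One hidden layer (tanh squashing) followed by a LINEAR output layer
W1 = rng.normal(0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)       # hidden activations, bounded in (-1, 1)
    pred = h @ W2 + b2             # linear output: no squashing, unbounded range
    err = pred - y                 # gradient of 0.5 * mean squared error
    # Backpropagation through both layers
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# The linear output layer lets the network predict values outside [0, 1]
test = np.array([[2.0]])
pred_val = float(np.tanh(test @ W1 + b1) @ W2 + b2)
print(pred_val)  # the true target at x = 2 is 2*2 + 5 = 9
```

The only change relative to a classification network is the identity activation on the final layer; the hidden layers can still use sigmoid or tanh, since their bounded outputs are rescaled by the learned output weights and bias.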
