Solved – Activation function for output layer for unbounded values

neural networks

Are there activation functions for the output layer of a neural network whose range is $\mathbb{R}$, so that I can perform true function approximation (and not be limited to arbitrary bounds on the output)?

I've considered using $\tanh^{-1}(2x - 1)$ on the output of a sigmoid neuron, since $\tanh^{-1}(x)$ has a domain of $(-1, 1)$ and a range of $\mathbb{R}$. The problem is that $\tanh^{-1}(2\sigma(x) - 1)$ is equivalent to $x/2$ (see this), so I'm not actually getting a truly unbounded function: the output of this kind of neuron is limited to [half the sum of all negative weights, half the sum of all positive weights].
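(The collapse follows from the identity $2\sigma(x) - 1 = \tanh(x/2)$, so applying $\tanh^{-1}$ just recovers $x/2$. A quick numerical check of this, added here for completeness and not part of the original question:)

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# arctanh(2*sigmoid(x) - 1) should collapse to x/2,
# because 2*sigmoid(x) - 1 = tanh(x/2).
x = np.linspace(-5, 5, 11)
composed = np.arctanh(2 * sigmoid(x) - 1)
print(np.allclose(composed, x / 2))  # True
```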

Are there different combinations of output functions that can give me a truly unbounded output?

Best Answer

There's the identity (linear) activation function. It simply outputs $a^{[l]} = z^{[l]}$, where $z^{[l]} = \beta + w \cdot a^{[l-1]}$.

With this one you can build a single-layer NN that works like an ordinary least squares model, as in the sketch below.
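Here's a minimal sketch of that equivalence (the data, learning rate, and iteration count are my own illustrative assumptions): gradient descent on squared error with an identity output converges to the ordinary least squares coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.3 + rng.normal(scale=0.1, size=200)

# Single "neuron" with identity activation, trained by gradient descent on MSE.
w, b = np.zeros(3), 0.0
for _ in range(5000):
    pred = X @ w + b              # identity activation: a = z
    grad = pred - y               # gradient of MSE/2 w.r.t. pred
    w -= 0.01 * X.T @ grad / len(y)
    b -= 0.01 * grad.mean()

# Closed-form OLS for comparison (last coefficient is the intercept).
X1 = np.column_stack([X, np.ones(len(X))])
beta = np.linalg.lstsq(X1, y, rcond=None)[0]
print(np.round(w, 3), round(b, 3))  # ~ the OLS slopes and intercept
print(np.round(beta, 3))
```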

There are a bunch of other unbounded functions, such as the bent identity and ReLU. The latter has a floor but not a cap; leaky ReLU is unbounded in both directions. Follow my link to see a few more functions used by researchers (though not so much in practice).
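For reference, a sketch of those activations written out (these are the standard textbook definitions, not quoted from the linked list):

```python
import numpy as np

def identity(z):
    return z                                # range: all of R

def relu(z):
    return np.maximum(0.0, z)               # floored at 0, no cap

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)    # unbounded in both directions

def bent_identity(z):
    return (np.sqrt(z**2 + 1) - 1) / 2 + z  # smooth and unbounded
```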