Solved – AutoEncoders and linear activation output function

autoencoders, neural networks, normalization

This is not a duplicate of Activation functions for autoencoder performing regression. A comment on that question mentions finding a linear activation function, but:

  1. the commenter never said what it was;
  2. I'm looking for a linear output activation function that can also output negative numbers less than -1.

I am building an autoencoder for feature reduction and have standardized my data because I have many features on different scales. However, this causes a problem: my output activation function (tanh) only outputs values between -1 and 1, while some of my inputs fall outside that range. Does anybody know of an activation function that can output numbers outside [-1, 1], so that my loss function compares outputs on the same scale as the inputs?

I have already tried min-max normalization to scale everything into [0, 1] and a sigmoid output function. But like I said, I'm working with many features on different scales, and I'd like to see whether I get better results with standardization. A small illustration of the range mismatch follows.
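To make the mismatch concrete, here is a small illustration with made-up data (the scalers and numbers are my own, not from my actual dataset):

```python
# Standardized data clashes with a tanh output: standardization leaves
# values well outside [-1, 1], which tanh can never reproduce.
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

rng = np.random.default_rng(0)
X = rng.normal(loc=50.0, scale=10.0, size=(1000, 3))  # features on a large scale

X_std = StandardScaler().fit_transform(X)  # zero mean, unit variance per feature
X_mm = MinMaxScaler().fit_transform(X)     # squashed into [0, 1]

print(X_std.min(), X_std.max())  # roughly -3 to 3: outside tanh's range
print(X_mm.min(), X_mm.max())    # exactly 0 to 1: fits a sigmoid output
```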

I guess what I'm specifically looking for is an output activation function that is linear, which I've never even heard of.

Anybody?

Best Answer

In vanilla autoencoders, i.e. autoencoders with a single hidden layer, it's common to use linear activations for both the hidden and output layers. You can use non-linear activations in the hidden layers, but then it is usually essential either to use an unbounded activation (such as the identity) for the output layer, or to transform the input so that it conforms to the codomain of the output activation function.
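Here is a minimal sketch of that setup in Keras (my assumption; the question doesn't name a framework, and the layer sizes are made up). The point is simply that `activation="linear"` is the identity function, so the output layer is unbounded and can reconstruct standardized inputs that fall outside [-1, 1]:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 20  # hypothetical input dimensionality
code_size = 5    # hypothetical bottleneck size

autoencoder = keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(code_size, activation="relu"),  # non-linear hidden layer
    # Linear (identity) output activation: unbounded, so it can match
    # standardized targets of any magnitude.
    layers.Dense(n_features, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")

# Train to reconstruct standardized data (zero mean, unit variance per feature).
X = np.random.randn(1000, n_features).astype("float32")
autoencoder.fit(X, X, epochs=10, batch_size=32, verbose=0)
```

With a linear output, the mean-squared-error loss is computed directly on the standardized scale, so no inverse transform is needed during training.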