Neural Networks – Understanding the Softmax Layer in Neural Networks


I'm trying to add a softmax layer to a neural network trained with backpropagation, so I'm trying to compute its gradient.

The softmax output is $h_j = \frac{e^{z_j}}{\sum_i e^{z_i}}$, where $j$ is the output neuron index.

If I differentiate it, I get

$\frac{\partial{h_j}}{\partial{z_j}}=h_j(1-h_j)$

This looks similar to logistic regression.
However, it must be wrong, since my numerical gradient check fails.

What am I doing wrong? My guess is that I also need to compute the cross derivatives (i.e. $\frac{\partial{h_j}}{\partial{z_k}}$ for $k \ne j$), but I'm not sure how to do that while keeping the gradient the same dimension, so that it fits into the backpropagation process.

Best Answer

I feel a little bit bad about providing my own answer for this because it is pretty well captured by amoeba and juampa, except for maybe the final intuition about how the Jacobian can be reduced back to a vector.

You correctly derived the diagonal entries of the Jacobian matrix, which is to say that

$ {\partial h_i \over \partial z_j}= h_i(1-h_j)\;\;\;\;\;\;: i = j $

and, as amoeba stated, you also have to derive the off-diagonal entries of the Jacobian, which yield

$ {\partial h_i \over \partial z_j}= -h_ih_j\;\;\;\;\;\;: i \ne j $

These two cases can be conveniently combined using the Kronecker delta, so the definition of the gradient becomes

$ {\partial h_i \over \partial z_j}= h_i(\delta_{ij}-h_j) $

So the Jacobian is a square matrix $ \left[J \right]_{ij}=h_i(\delta_{ij}-h_j) $
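As a concrete illustration, here is a minimal NumPy sketch (the function names are my own choice, not from any particular library) that builds this Jacobian directly as $\mathrm{diag}(\vec{h}) - \vec{h}\vec{h}^T$:

```python
import numpy as np

def softmax(z):
    # Subtract the max before exponentiating for numerical stability;
    # this does not change the result.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def softmax_jacobian(h):
    # [J]_ij = h_i * (delta_ij - h_j), i.e. diag(h) - h h^T
    return np.diag(h) - np.outer(h, h)

z = np.array([1.0, 2.0, 0.5])
h = softmax(z)
J = softmax_jacobian(h)
```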

All of the information up to this point is already covered by amoeba and juampa. The problem, of course, is that we need the errors at the layer's inputs given the errors at its outputs, which have already been computed. Because each output $h_i$ depends on all of the inputs $z_k$, the error at input $z_k$ collects a contribution from every output:

$[\nabla z]_k = \sum\limits_{i} \frac{\partial h_i}{\partial z_k}\,[\nabla h]_i$

Given the Jacobian matrix defined above, this is implemented trivially as the product of the matrix and the output error vector:

$ \vec{\sigma}_l = J\,\vec{\sigma}_{l+1} $
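Continuing the sketch above (so `softmax`, `softmax_jacobian`, and `z` are assumed to be defined as before), the backpropagation step is a single matrix–vector product, and a finite-difference check against a dummy linear loss $L(\vec{z}) = \vec{g}\cdot\vec{h}(\vec{z})$ confirms it. This is exactly the kind of numerical check the question mentions; using only the diagonal terms $h_j(1-h_j)$ would make it fail.

```python
def backprop_softmax(h, grad_h):
    # Error at the softmax inputs: Jacobian times the error at its outputs.
    # (J is symmetric here, so J and J^T give the same result.)
    return softmax_jacobian(h) @ grad_h

# Dummy upstream error g = dL/dh for the linear loss L(z) = g . softmax(z).
g = np.array([0.3, -1.2, 0.7])
analytic = backprop_softmax(softmax(z), g)

# Central finite differences of L(z) with respect to each z_k.
eps = 1e-6
numeric = np.array([
    (g @ softmax(z + eps * e) - g @ softmax(z - eps * e)) / (2 * eps)
    for e in np.eye(len(z))
])
print(np.allclose(analytic, numeric))   # True
```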

If the softmax layer is your output layer, then combining it with the cross-entropy cost function simplifies the computation to

$ \vec{\sigma_l} = \vec{h}-\vec{t} $

where $\vec{t}$ is the vector of labels, and $\vec{h}$ is the output from the softmax function. Not only is the simplified form convenient, it is also extremely useful from a numerical stability standpoint.
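To see the simplification concretely (again continuing the sketch above, and assuming $\vec{t}$ is one-hot so that its entries sum to one), chaining the Jacobian with the cross-entropy gradient $\partial L/\partial h_i = -t_i/h_i$ reproduces $\vec{h}-\vec{t}$:

```python
def cross_entropy_grad_wrt_h(h, t):
    # dL/dh for L = -sum_i t_i * log(h_i)
    return -t / h

t = np.array([0.0, 1.0, 0.0])            # one-hot label vector
h = softmax(z)

chained  = softmax_jacobian(h) @ cross_entropy_grad_wrt_h(h, t)
shortcut = h - t
print(np.allclose(chained, shortcut))    # True
```

Part of the numerical-stability benefit is visible here: the shortcut $\vec{h}-\vec{t}$ never divides by an $h_i$ that may be very close to zero.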