Solved – Backpropagation between pooling and convolutional layers

backpropagation conv-neural-network neural-networks

I've been working on understanding how convolutional neural networks work by building my own implementation and running a small network. So far I think I have a good handle on the feed-forward pass through the network, and a good grasp of how to backpropagate from the fully-connected layers to the pooling layers.

Unfortunately, I've been having trouble working out the backpropagation from a convolutional layer back up to a pooling layer. Since the output of the pooling layer has different dimensions than the output of the convolutional layer, I'm guessing that the backward pass is a full convolution of the convolutional layer's weights with the errors. Is this the correct calculation to do?
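For concreteness, here is a minimal NumPy/SciPy sketch (my own illustration, not from the original post) of that intuition: if the forward pass is a "valid" cross-correlation with kernel `w`, then the gradient with respect to the layer's input is a full convolution of the upstream error with `w` (SciPy's `convolve2d` flips the kernel itself, which accounts for the 180° rotation). A central-difference check on one entry confirms it.

```python
import numpy as np
from scipy.signal import correlate2d, convolve2d

# Forward pass: 'valid' cross-correlation, as in a conv layer without padding.
x = np.random.randn(5, 5)        # input to the conv layer (e.g. a pooling output)
w = np.random.randn(3, 3)        # conv kernel
y = correlate2d(x, w, mode='valid')

# Suppose dL/dy arrives from the layer downstream.
dy = np.random.randn(*y.shape)

# Gradient w.r.t. the conv layer's input: full convolution of the upstream
# error with the kernel (convolve2d performs true convolution, i.e. it
# flips the kernel relative to correlate2d).
dx = convolve2d(dy, w, mode='full')

# Numerical check of one entry via central differences (epsilon is arbitrary).
eps = 1e-6
loss = lambda xi: np.sum(correlate2d(xi, w, mode='valid') * dy)
x_p, x_m = x.copy(), x.copy()
x_p[2, 2] += eps
x_m[2, 2] -= eps
numeric = (loss(x_p) - loss(x_m)) / (2 * eps)
assert np.allclose(numeric, dx[2, 2], atol=1e-4)
```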

Best Answer

Although this was asked quite a while ago, I bumped into the question and saw it had no answers. I then found this on the Data Science Stack Exchange:

https://datascience.stackexchange.com/questions/11699/backprop-through-max-pooling-layers

Quoting user104493:

"There is no gradient with respect to non maximum values, since changing them slightly does not affect the output. Further the max is locally linear with slope 1, with respect to the input that actually achieves the max. Thus, the gradient from the next layer is passed back to only that neuron which achieved the max. All other neurons get zero gradient."
