Most common convolutional neural networks contain pooling layers to reduce the dimensions of the output features. Why couldn't I achieve the same thing by simply increasing the stride of the convolutional layer? What makes the pooling layer necessary?
Solved – Why is max pooling necessary in convolutional neural networks
Tags: conv-neural-network, deep-learning, pooling
Best Answer
You can indeed do that; see Striving for Simplicity: The All Convolutional Net. Pooling gives you some amount of translation invariance, which may or may not be helpful. Pooling is also cheaper to compute than a convolution. Still, you can always try replacing pooling with strided convolution and see what works better.

Some current architectures use average pooling (Wide Residual Networks, DenseNets), while others use strided convolution (DelugeNets).
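To make the equivalence concrete, here is a minimal sketch (in PyTorch, which is an assumption; the thread doesn't name a framework) showing that a 2×2 max pool after a convolution and a single stride-2 convolution both halve the spatial dimensions:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 32, 32)  # batch, channels, height, width

# Option 1: convolution followed by 2x2 max pooling
conv_then_pool = nn.Sequential(
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # keeps 32x32
    nn.MaxPool2d(kernel_size=2, stride=2),        # downsamples to 16x16
)

# Option 2: the convolution itself downsamples via stride=2
strided_conv = nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1)

print(conv_then_pool(x).shape)  # torch.Size([1, 32, 16, 16])
print(strided_conv(x).shape)    # torch.Size([1, 32, 16, 16])
```

Both paths produce feature maps of the same shape; the difference is that max pooling selects the strongest activation in each window (giving some local translation invariance), while the strided convolution learns its own downsampling filter.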