CNNs – What the Convolution Step Does in a Convolutional Neural Network

conv-neural-network, convolution, deep-learning, neural-networks

I am studying convolutional neural networks (CNNs) due to their applications in computer vision. I am already familiar with standard feed-forward neural networks, so I'm hoping that some people here can help me take the extra step in understanding CNNs. Here's what I think about CNNs:

  1. In traditional feed-forward NNs, we have training data where each
    element consists of a feature vector that we input to the NN in the
    "input layer," so with image recognition, we could just have each
    pixel be one input. Those are our feature vectors. Alternatively, we could manually create other — likely smaller — feature vectors.
  2. The advantage of the CNN is that it can generate stronger feature vectors that are more invariant to image distortion and position. As the following image shows (from this tutorial), CNNs generate feature maps that are then fed to a standard neural network (so really it's a giant pre-processing step).

[Image from the tutorial: feature maps produced by alternating convolution and sub-sampling, fed into a standard neural network]

  3. The way we get those "better" features is by alternating convolution and sub-sampling. I understand how sub-sampling works: for each feature map, we just take a subset of the pixels, or we average out the values of neighboring pixels (a sketch of both variants follows below).
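For concreteness, here is a minimal NumPy sketch of the two sub-sampling variants I mean (the $4 \times 4$ map and the $2 \times 2$ window are just illustrative assumptions):

```python
import numpy as np

def subsample(feature_map, size=2, mode="max"):
    """Shrink a 2D feature map by taking the max (or mean)
    over non-overlapping size x size windows."""
    h, w = feature_map.shape
    # Group the map into (h//size) x (w//size) blocks of size x size
    blocks = feature_map[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size)
    if mode == "max":
        return blocks.max(axis=(1, 3))   # "take a subset": keep the strongest value
    return blocks.mean(axis=(1, 3))      # "average out values of pixels"

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(subsample(fmap, mode="max"))   # each output entry is the max of a 2x2 block
print(subsample(fmap, mode="mean"))  # each output entry is the mean of a 2x2 block
```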

But what I'm mainly confused about is how the convolution step works. I am familiar with convolutions from probability theory (the density of the sum of two independent random variables, $(f * g)(t) = \int f(\tau)\, g(t - \tau)\, d\tau$), but how do they work in CNNs, and why are they effective?

My question is similar to this one, but in particular I am not sure why the first convolution step works.

Best Answer

I'll first try to share some intuition behind CNNs and then comment on the particular points you listed.

The convolution and sub-sampling layers in a CNN are not different from the hidden layers in a common MLP, i.e., their function is to extract features from their input. These features are then given to the next hidden layer to extract still more complex features, or are directly given to a standard classifier to output the final prediction (usually a softmax, but an SVM or any other classifier can also be used). In the context of image recognition, these features are image traits, such as stroke patterns in the lower layers and object parts in the upper layers.
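For reference, the softmax mentioned above just turns the final feature vector into class probabilities; a minimal sketch (the scores are made up):

```python
import numpy as np

def softmax(scores):
    scores = scores - scores.max()   # shift for numerical stability
    exps = np.exp(scores)
    return exps / exps.sum()

# Hypothetical scores for 3 classes from the last feature layer
print(softmax(np.array([2.0, 1.0, 0.1])))  # probabilities summing to 1
```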

In natural images these features tend to be the same at all locations: recognizing a certain stroke pattern in the middle of the image is as useful as recognizing it close to the borders. So why don't we replicate the hidden layer and connect copies of it to all regions of the input image, so that the same features can be detected anywhere? That is exactly what a CNN does, but in an efficient way. After the replication (the "convolution" step) we add a sub-sampling step, which can be implemented in many ways but is nothing more than sub-sampling. In theory this step could even be removed, but in practice it is essential to keep the problem tractable.
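To see why the weight sharing makes this replication efficient, compare the parameter counts (the $28 \times 28$ image and $3 \times 3$ patch sizes are illustrative assumptions):

```python
# One 3x3 feature detected at every valid location of a 28x28 image
image_size, patch = 28, 3
positions = (image_size - patch + 1) ** 2   # 26 * 26 = 676 locations

# Naive replication: an independent 3x3 neuron per location
naive_params = positions * patch * patch    # 6084 weights

# CNN: all replicated copies share the same 9 weights
shared_params = patch * patch               # 9 weights

print(naive_params, shared_params)          # 6084 vs 9
```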

Thus:

  1. Correct.
  2. As explained above, the hidden layers of a CNN are feature extractors, just as in a regular MLP. The alternating convolution and sub-sampling steps are done during both training and classification, so they are not something done "before" the actual processing. I wouldn't call them "pre-processing", in the same way that the hidden layers of an MLP are not called so.
  3. Correct.

A good image to help understand convolution is the CNN page in the UFLDL tutorial. Think of a hidden layer with a single neuron which is trained to extract features from $3 \times 3$ patches. If we convolve this single learned feature over a $5 \times 5$ image, the process can be represented by the following gif:

[Animated gif from the UFLDL tutorial: a $3 \times 3$ feature sliding over a $5 \times 5$ image]

In this example we used a single neuron in our feature-extraction layer and generated $9$ convolved features. If the hidden layer had many units, each would produce its own feature map, and it becomes clear why a sub-sampling step is needed afterwards to keep the amount of data manageable.
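In code, the process in the gif amounts to sliding the neuron's $3 \times 3$ weights over every patch of the image. A minimal NumPy sketch (random values stand in for the learned weights; strictly speaking this computes a cross-correlation, i.e. convolution without flipping the kernel, which is what most CNN implementations use, since for learned weights the flip is irrelevant; a bias and nonlinearity would normally be applied to each sum as well):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((5, 5))   # the 5x5 input image
w = rng.random((3, 3))       # the single learned 3x3 feature

# Apply the same 3x3 weights to every 3x3 patch ("valid" convolution)
out = np.empty((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = np.sum(image[i:i + 3, j:j + 3] * w)

print(out.shape)  # (3, 3): the 9 convolved features from the example
```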

The subsequent convolution and sub-sampling steps are based on the same principle, but are computed over the features extracted in the previous layer instead of the raw pixels of the original image.
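A quick shape walk-through makes the stacking concrete (all sizes here are illustrative assumptions):

```python
def conv_out(size, kernel):   # "valid" convolution output size
    return size - kernel + 1

def pool_out(size, window):   # non-overlapping sub-sampling
    return size // window

s = 28                        # a hypothetical 28x28 input image
s = conv_out(s, 5)            # 24: first convolution, over raw pixels
s = pool_out(s, 2)            # 12: first sub-sampling
s = conv_out(s, 5)            # 8: second convolution, over layer-1 features
s = pool_out(s, 2)            # 4: second sub-sampling; then a standard classifier
print(s)
```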