I understand the convolutional and pooling layers, but I cannot see the reason for a fully connected layer in CNNs. Why isn't the previous layer directly connected to the output layer?
CNN Deep Learning – Fully Connected Layers Explanation
Tags: conv-neural-network, deep-learning, neural-networks
Best Answer
The output from the convolutional layers represents high-level features in the data. While that output could be flattened and connected to the output layer, adding a fully-connected layer is a (usually) cheap way of learning non-linear combinations of these features.
Essentially the convolutional layers are providing a meaningful, low-dimensional, and somewhat invariant feature space, and the fully-connected layer is learning a (possibly non-linear) function in that space.
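A minimal numpy sketch of this idea (the shapes and output size are illustrative assumptions, not from the answer): the conv layers produce a stack of feature maps, which are flattened into one feature vector, and the fully-connected layer then learns a non-linear function of that vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conv-layer output for one image: 8 feature maps of size 4x4.
conv_out = rng.standard_normal((8, 4, 4))

# Flatten the maps into a single feature vector.
features = conv_out.reshape(-1)  # shape (128,)

# Fully-connected layer: learned weights and bias followed by a
# non-linearity, i.e. a learned non-linear combination of the conv features.
W = rng.standard_normal((10, features.size)) * 0.1  # 10 output units (assumed)
b = np.zeros(10)
logits = np.maximum(0.0, W @ features + b)  # ReLU(Wx + b)

print(logits.shape)  # (10,)
```

In a real network `W` and `b` would be trained jointly with the conv filters; here they are random just to show the data flow.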
NOTE: It is straightforward to convert FC layers to equivalent conv layers. Converting these top FC layers to conv layers can be helpful, for example to turn the network into a fully convolutional one that can slide over larger inputs.
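The conversion can be checked numerically: an FC layer applied to a flattened C×H×W input is the same as a conv layer whose filters span the full H×W extent, so each filter application reduces to a single dot product. A small numpy sketch (all sizes assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

C, H, W_dim = 8, 4, 4
x = rng.standard_normal((C, H, W_dim))  # conv feature maps feeding the FC layer

# FC layer: 10 output units over the flattened input.
W_fc = rng.standard_normal((10, C * H * W_dim))
fc_out = W_fc @ x.reshape(-1)

# The same weights viewed as 10 conv filters of shape (C, H, W): each filter
# covers the whole spatial extent, so "convolving" it with x is one dot product.
filters = W_fc.reshape(10, C, H, W_dim)
conv_out = np.array([(f * x).sum() for f in filters])

print(np.allclose(fc_out, conv_out))  # True
```

On a larger input the reshaped filters would slide spatially, producing a map of class scores instead of a single vector, which is the point of the conversion.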