I'll first try to share some intuition behind CNNs and then comment on the particular topics you listed.
The convolution and sub-sampling layers in a CNN are no different from the hidden layers in a common MLP, i.e., their function is to extract features from their input. These features are then given to the next hidden layer to extract still more complex features, or are directly given to a standard classifier to output the final prediction (usually a Softmax, but an SVM or any other classifier can be used). In the context of image recognition, these features are image traits, like stroke patterns in the lower layers and object parts in the upper layers.
In natural images these features tend to be the same at all locations. Recognizing a certain stroke pattern in the middle of the image is as useful as recognizing it close to the borders. So why don't we replicate the hidden layers and connect multiple copies of them to all regions of the input image, so the same features can be detected anywhere? That's exactly what a CNN does, but in an efficient way. After the replication (the "convolution" step) we add a sub-sampling step, which can be implemented in many ways (e.g., max pooling or averaging), but is nothing more than a sub-sampling. In theory this step could even be removed, but in practice it's essential to keep the problem tractable.
Thus:
- Correct.
- As explained above, hidden layers of a CNN are feature extractors, as in a regular MLP. The alternated convolution and sub-sampling steps are done during both training and classification, so they are not something done "before" the actual processing. I wouldn't call them "pre-processing", the same way the hidden layers of an MLP are not called so.
- Correct.
A good image which helps to understand the convolution is the CNN page in the UFLDL tutorial. Think of a hidden layer with a single neuron which is trained to extract features from $3 \times 3$ patches. If we convolve this single learned feature over a $5 \times 5$ image, the process can be represented by the following gif:
In this example we used a single neuron in our feature extraction layer, generating $9$ convolved features. If we had a larger number of units in the hidden layer, it would be clear why the sub-sampling step after this is required.
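To make the step concrete, here is a minimal numpy sketch of that convolution; the image and kernel values are made up for illustration:

```python
import numpy as np

# One learned 3x3 feature convolved over a 5x5 image ("valid" convolution):
# the feature fires at each of the 3x3 = 9 positions where it fits.
image = np.arange(25, dtype=float).reshape(5, 5)  # toy 5x5 "image"
kernel = np.random.randn(3, 3)                    # one learned 3x3 feature

conv = np.empty((3, 3))
for i in range(3):
    for j in range(3):
        patch = image[i:i + 3, j:j + 3]           # the 3x3 patch at (i, j)
        conv[i, j] = np.sum(patch * kernel)       # feature response there

print(conv.shape)   # (3, 3): the 9 convolved features
pooled = conv.max() # crude sub-sampling: one summary value per feature map
```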
The subsequent convolution and sub-sampling steps are based on the same principle, but computed over features extracted in the previous layer, instead of the raw pixels of the original image.
What is an SVM, anyway?
I think the answer for most purposes is “the solution to the following optimization problem”:
$$
\begin{split}
\operatorname*{arg\,min}_{f \in \mathcal H} \frac{1}{n} \sum_{i=1}^n \ell_\mathit{hinge}(f(x_i), y_i) \, + \lambda \lVert f \rVert_{\mathcal H}^2
\\ \ell_\mathit{hinge}(t, y) = \max(0, 1 - t y)
,\end{split}
\tag{SVM}
$$
where $\mathcal H$ is a reproducing kernel Hilbert space, $y$ is a label in $\{-1, 1\}$, and $t = f(x) \in \mathbb R$ is a “decision value”; our final prediction will be $\operatorname{sign}(t)$. In the simplest case, $\mathcal H$ could be the space of affine functions $f(x) = w \cdot x + b$, and $\lVert f \rVert_{\mathcal H}^2 = \lVert w \rVert^2 + b^2$. (Handling of the offset $b$ varies depending on exactly what you’re doing, but that’s not important for our purposes.)
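For concreteness, here is a minimal numpy sketch of that linear special case of (SVM); the function and variable names are my own, not from any particular library:

```python
import numpy as np

def linear_svm_objective(w, b, X, y, lam):
    """Value of (SVM) for the affine case f(x) = w . x + b,
    with ||f||^2 = ||w||^2 + b^2 and labels y in {-1, +1}."""
    t = X @ w + b                          # decision values f(x_i)
    hinge = np.maximum(0.0, 1.0 - t * y)   # hinge loss per example
    return hinge.mean() + lam * (w @ w + b * b)
```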
In the ‘90s through the early ‘10s, there was a lot of work on solving this particular optimization problem in various smart ways, and indeed that’s what LIBSVM / LIBLINEAR / SVMlight / ThunderSVM / ... do. But I don’t think that any of these particular algorithms are fundamental to “being an SVM,” really.
Now, how do we train a deep network? Well, we try to solve something like, say,
$$
\begin{split}
\operatorname*{arg\,min}_{f \in \mathcal F} \frac1n \sum_{i=1}^n \ell_\mathit{CE}(f(x_i), y_i) + R(f)
\\
\ell_\mathit{CE}(p, y) = - y \log(p) - (1-y) \log(1 - p)
,\end{split}
\tag{$\star$}
$$
where now $\mathcal F$ is the set of deep nets we consider, which output probabilities $p = f(x) \in [0, 1]$. The explicit regularizer $R(f)$ might be an L2 penalty on the weights in the network, or we might just use $R(f) = 0$. Although we could solve (SVM) up to machine precision if we really wanted, we usually can’t do that for $(\star)$ when $\mathcal F$ is more than one layer deep; instead we use stochastic gradient descent to find an approximate solution.
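As an illustration (not anyone’s reference implementation), a bare-bones PyTorch version of attacking $(\star)$ with SGD, using $R(f) = 0$ and made-up data, might look like:

```python
import torch

# Tiny one-hidden-layer net outputting probabilities p = f(x) in [0, 1].
net = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 1), torch.nn.Sigmoid(),
)
opt = torch.optim.SGD(net.parameters(), lr=0.1)

X = torch.randn(64, 10)                   # made-up inputs
y = torch.randint(0, 2, (64, 1)).float()  # labels in {0, 1}

for _ in range(100):                      # SGD on the CE loss in (*)
    opt.zero_grad()
    loss = torch.nn.functional.binary_cross_entropy(net(X), y)
    loss.backward()
    opt.step()
```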
If we take $\mathcal F$ as a reproducing kernel Hilbert space and $R(f) = \lambda \lVert f \rVert_{\mathcal F}^2$, then $(\star)$ becomes very similar to (SVM), just with cross-entropy loss instead of hinge loss: this is also called kernel logistic regression. My understanding is that the reason SVMs took off in a way kernel logistic regression didn’t is largely due to a slight computational advantage of the former (more amenable to these fancy algorithms), and/or historical accident; there isn’t really a huge difference between the two as a whole, as far as I know. (There is sometimes a big difference between an SVM with a fancy kernel and a plain linear logistic regression, but that’s comparing apples to oranges.)
So, what does a deep network using an SVM to classify look like? Well, that could mean a few different things, but I think the most natural interpretation is just using $\ell_\mathit{hinge}$ in $(\star)$.
One minor issue is that $\ell_\mathit{hinge}$ isn’t differentiable where $t y = 1$; we could instead use $\ell_\mathit{hinge}^2$ if we want. (Doing this in (SVM) is sometimes called “L2-SVM” or similar names.) Or we can just ignore the non-differentiability; the ReLU activation isn’t differentiable at 0 either, and this usually doesn’t matter. This can be justified via subgradients, although note that the correctness here is actually quite subtle when dealing with deep networks.
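Concretely, a hinge (or squared hinge) criterion for a deep net that outputs raw decision values $t$ could be sketched in PyTorch like this; it’s throwaway illustration code, not from any paper:

```python
import torch

def hinge_loss(t, y, squared=False):
    """y in {-1, +1}; t are raw decision values f(x)."""
    margins = torch.clamp(1.0 - y * t, min=0.0)
    if squared:
        margins = margins ** 2  # "L2-SVM" variant: differentiable everywhere
    return margins.mean()
```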
An ICML workshop paper – Tang, Deep Learning using Linear Support Vector Machines, ICML 2013 workshop Challenges in Representation Learning – found that using $\ell_\mathit{hinge}^2$ gave small but consistent improvements over $\ell_\mathit{CE}$ on the problems they considered. I’m sure others have tried (squared) hinge loss in deep networks since, but it certainly hasn’t taken off widely.
(You have to modify both $\ell_\mathit{CE}$ as I’ve written it and $\ell_\mathit{hinge}$ to support multi-class classification, but in the one-vs-rest scheme used by Tang, both are easy to do.)
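In the one-vs-rest scheme, that could look like the following sketch, where each of the $K$ output columns is treated as its own binary $\{-1, +1\}$ problem; again, this is my own illustrative code:

```python
import torch

def ovr_hinge_loss(scores, labels, squared=True):
    """scores: (n, K) raw outputs; labels: (n,) integer class ids."""
    y = -torch.ones_like(scores)                     # -1 everywhere...
    y[torch.arange(scores.shape[0]), labels] = 1.0   # ...+1 for the true class
    margins = torch.clamp(1.0 - y * scores, min=0.0)
    if squared:
        margins = margins ** 2
    return margins.mean()
```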
Another thing that’s sometimes done is to train a CNN in the typical way, but then take the output of a late layer as “features” and train a separate SVM on that. This was common in the early days of transfer learning with deep features, but is, I think, less common now.
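A sketch of that pipeline, assuming some hypothetical `extract_features` helper that runs the trained CNN up to a late layer and returns arrays:

```python
from sklearn.svm import LinearSVC

# Hypothetical: extract_features(cnn, images) -> (n, d) array of activations.
train_feats = extract_features(cnn, train_images)
test_feats = extract_features(cnn, test_images)

svm = LinearSVC(C=1.0)  # a separate linear SVM on the deep features
svm.fit(train_feats, train_labels)
print(svm.score(test_feats, test_labels))
```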
Something like this is also done sometimes in other contexts, e.g. in meta-learning by Lee et al., Meta-Learning with Differentiable Convex Optimization, CVPR 2019, who actually solved (SVM) on deep network features and backpropped through the whole thing. (They didn't, but you can even do this with a nonlinear kernel in $\mathcal H$; this is also done in some other "deep kernels" contexts.) It’s a very cool approach – one that I've also worked on – and in certain domains this type of approach makes a ton of sense, but there are some pitfalls, and I don’t think it’s very applicable to a typical "plain classification" problem.
I would hazard a guess that in this case your convolution kernels start memorizing groups of features and useful correlations between consecutive pairs of features. As a concrete example, imagine we have four features: (age, gender, education, race). You can then use, say, $1 \times 2$ convolution kernels with weights $w_1, w_2$, which slide across your features. So your kernel will extract weighted pairings:
$\mathit{out}_1 = w_1 \cdot \mathit{age} + w_2 \cdot \mathit{gender}$,
$\mathit{out}_2 = w_1 \cdot \mathit{gender} + w_2 \cdot \mathit{education}$,
$\mathit{out}_3 = w_1 \cdot \mathit{education} + w_2 \cdot \mathit{race}$.
The simplest kernel might look like $w_1=1$, $w_2=0$, in which case the outs would just be age, gender, and education. Another kernel might be $w_1=0, w_2=1$, which would get you "race" in the last $\mathit{out}_3$.
When features are correlated and the relationship between them is nonlinear, you might see weights like $w_1=0.6, w_2=-0.3$; then, when your outs are passed along to the fully connected layers, maybe only $\mathit{out}_1$ is used to capture the correlation between age and gender, and your fully connected layers will manage the rest.
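A tiny numpy sketch of this sliding kernel, with toy numbers standing in for the encoded features:

```python
import numpy as np

x = np.array([0.42, 1.0, 0.7, 0.0])  # toy (age, gender, education, race)
w = np.array([0.6, -0.3])            # the kernel weights from above

outs = np.array([w @ x[i:i + 2] for i in range(3)])
# outs[0] = 0.6*age - 0.3*gender, etc. With w = [1, 0] the outs are just
# (age, gender, education); with w = [0, 1] they are (gender, education, race).
```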
Usually, though, convolutions are useful when your features are invariant under translation (an example is detection of faces in images), which in this case they are not. So I think your network would just use, say, only $\mathit{out}_1$ for the first kernel, $\mathit{out}_2$ for the second kernel, etc. Again, if there is some correlation, the learned solution will likely be messier, especially if training gets stuck in a local minimum.
You might think this would be useful in NLP, but the evidence there is slimmer. A great example is convolutional neural nets used for text classification here:
https://arxiv.org/abs/1502.01710
The kernels there are defined over one-hot encodings of characters. However, it's difficult to tell whether all the network is doing is just memorizing sequences of words in a very expensive way. Example: "[blah] was delicious!" would strongly correlate with "food."
FastText was trained to do the same thing, except instead of training kernels, they literally enumerated billions of n-grams and used each as a feature in a sparse logistic classifier. The result was considerably faster (we're talking minutes vs days for training) and essentially as accurate:
https://arxiv.org/abs/1607.01759
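A rough scikit-learn sketch of that kind of bag-of-n-grams baseline (FastText itself uses its own hashed implementation, so this is only an analogy, with made-up toy data):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # sparse unigram + bigram counts
    LogisticRegression(max_iter=1000),    # linear classifier on those features
)
clf.fit(["the food was delicious", "terrible service, never again"],
        ["food", "service"])
print(clf.predict(["the soup was delicious"]))
```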