I believe the RBF kernel projects the data into 3D space by centering a three-dimensional bump (an un-normalized Gaussian) on top of each data point. The width of the bumps is given by the $\gamma$ parameter.
These bumps overlap, so to figure out the z value at a particular place you need to sum over all of the data points. If instead of $x, y$ we use $x_1, x_2$, and index all of the data points as $\mathbf{x}_i$, then the formula to calculate the projection is:
$$
z(\mathbf{x}) = \sum_{i=1}^{n} \exp\left\{ - \frac{ \| \mathbf{x} - \mathbf{x}_i \|^2}{2 \gamma^2 } \right\}
$$
where $\mathbf{x}$ and $\mathbf{x}_i$ are two-dimensional vectors and $\| \mathbf{x} - \mathbf{x}_i \|$ is the Euclidean distance between them.
That is, to find the z value at each point, you need to sum across all the data points.
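To make the bump-sum picture concrete, here is a minimal NumPy sketch; the data points, the query point, and the $\gamma$ value are made-up toy values, not anything from the question:

```python
import numpy as np

# Made-up 2D data points (x_1, x_2); in practice these are the training inputs.
X = np.array([[0.0, 0.0],
              [1.0, 1.0],
              [-1.0, 0.5]])
gamma = 0.5  # controls the width of each bump, as in the formula above

def z(x, X, gamma):
    """Sum of un-normalized Gaussian bumps centered on each data point."""
    sq_dists = np.sum((X - x) ** 2, axis=1)        # ||x - x_i||^2 for every i
    return np.sum(np.exp(-sq_dists / (2.0 * gamma ** 2)))

print(z(np.array([0.5, 0.5]), X, gamma))  # height of the summed surface at (0.5, 0.5)
```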
If you solve the SVM optimization problem in its dual form, it turns out that the problem depends on the training data $\{x_i\}_{i=1}^n$ only through their inner products. That is, you only need $\{x_i^\top x_j\}_{i, j=1}^n$, i.e., the inner products of all pairs of points you have. So to train an SVM, you only need to give it the labels $Y=(y_1, \ldots, y_n)$ and a kernel matrix $K$ where $K_{ij} = x_i^\top x_j$.
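As a rough illustration (not part of the original setup), here is a toy sketch using scikit-learn's `SVC` with `kernel='precomputed'`; the data `X` and labels `Y` are invented, but it shows that the labels and the matrix $K_{ij} = x_i^\top x_j$ are all the SVM actually consumes:

```python
import numpy as np
from sklearn.svm import SVC

# Invented toy data and labels, just for illustration
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.5], [3.0, 3.0]])
Y = np.array([0, 0, 1, 1])

# Gram matrix of plain inner products: K_ij = x_i^T x_j
K = X @ X.T

# The SVM sees only Y and K; the raw points are no longer needed
clf = SVC(kernel='precomputed')
clf.fit(K, Y)

# Prediction needs the inner products between new points and the training points
X_new = np.array([[0.5, 0.5]])
print(clf.predict(X_new @ X.T))
```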
Now, to map each data point $x_i$ to a high-dimensional space, you apply $\phi$ to it. So the kernel matrix becomes
$$K_{ij} = \langle \phi(x_i), \phi(x_j)\rangle$$
where $\langle \cdot, \cdot \rangle$ is just formal notation for an inner product in a general inner product space. It can be seen that as long as we can define an inner product in the high-dimensional space, we can train an SVM. We do not even need to compute $\phi(x)$ itself; we only need to compute the inner product $\langle \phi(x_i), \phi(x_j)\rangle$. This is where we set
$$K_{ij} = k(x_i, x_j)$$
for some kernel $k$ of your choice. It is known (by the Moore-Aronszajn theorem) that if $k$ is positive definite, then it corresponds to some inner product space, i.e., there exists a corresponding feature map $\phi(\cdot)$ such that $k(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle$.
To answer your question, the kernel $k(x,y)$ does not specify a projection of $x$. It is $\phi(\cdot)$ (which is usually implicit) associated with $k$ that specifies the projection. As an example, the feature map $\phi$ of an RBF kernel $k(x,y) = \exp(-\gamma \|x-y\|_2^2)$ is infinite-dimensional.
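Continuing the toy sketch above, swapping in the RBF kernel only changes how the matrix $K$ is filled in; $\phi(x)$ is never computed (again, the data and $\gamma$ are assumed toy values):

```python
import numpy as np
from sklearn.svm import SVC

# Same invented toy data as before
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.5], [3.0, 3.0]])
Y = np.array([0, 0, 1, 1])
gamma = 0.5  # assumed value

def rbf_kernel_matrix(A, B, gamma):
    """K_ij = exp(-gamma * ||a_i - b_j||^2), with no feature map in sight."""
    sq_dists = (np.sum(A ** 2, axis=1)[:, None]
                + np.sum(B ** 2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-gamma * sq_dists)

K = rbf_kernel_matrix(X, X, gamma)   # n x n kernel matrix on the training points
clf = SVC(kernel='precomputed')
clf.fit(K, Y)

X_new = np.array([[0.5, 0.5]])
print(clf.predict(rbf_kernel_matrix(X_new, X, gamma)))
```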
Best Answer
No. A kernel is a function that calculates the dot product in the image of this mapping.
It can be thought of as defining a dot product using the dot product from another space, where the mapping into this (often higher-dimensional) space is implicit.
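One way to see this concretely is with a kernel whose feature map is finite and easy to write down, e.g. the quadratic kernel $k(x, y) = (x^\top y)^2$ on 2D inputs (chosen here purely for illustration; it is not the RBF kernel discussed above):

```python
import numpy as np

def k(x, y):
    """Quadratic kernel: (x^T y)^2."""
    return (x @ y) ** 2

def phi(x):
    """Explicit feature map whose dot product reproduces k on 2D inputs."""
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
print(k(x, y), phi(x) @ phi(y))  # both print the same number
```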