Solved – Manifold learning: does an embedding function need to be well behaved

dimensionality-reduction, machine-learning, manifold-learning

I am trying to learn about manifold learning techniques, a family of methods in machine learning. According to this idea, there is a low-dimensional ($d$-dimensional) hidden space where the real data-generating mechanism lies, with $d$ degrees of variability, but we observe the data in a high-dimensional ($m$-dimensional) space with $m > d$. A function $f:\mathbb{R}^d \to \mathbb{R}^m$, called the embedding function, takes points from the low-dimensional hidden space and maps them into the high-dimensional observable one as $x_i = f(\tau_i) + \epsilon_i$, where $\epsilon_i$ is a noise term. The aim is to learn the function $f$. All of these ideas are summed up in the following slide:

[Slide: the latent space $\mathbb{R}^d$ mapped by the embedding $f$ onto a smooth $d$-dimensional surface inside the observation space $\mathbb{R}^m$]
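
To make the generative model concrete, here is a minimal Python sketch of sampling $x_i = f(\tau_i) + \epsilon_i$ with $d = 2$ and $m = 3$. The Swiss-roll map, the noise scale, and the latent distribution are illustrative choices of mine, not something specified in the slide:

```python
import numpy as np

rng = np.random.default_rng(0)

d, m, n = 2, 3, 1000            # latent dim, ambient dim, sample size

# Latent coordinates tau_i drawn from the hidden d-dimensional space.
tau = rng.uniform(low=[3.0, 0.0], high=[9.0, 10.0], size=(n, d))

def f(tau):
    """A smooth map R^2 -> R^3 ('Swiss roll'): (t, s) -> (t cos t, s, t sin t)."""
    t, s = tau[:, 0], tau[:, 1]
    return np.column_stack([t * np.cos(t), s, t * np.sin(t)])

# Observed data: x_i = f(tau_i) + eps_i, with small isotropic Gaussian noise.
eps = 0.05 * rng.standard_normal((n, m))
X = f(tau) + eps                # shape (n, m); lies near a 2-d manifold in R^3
```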
What I don't get exactly in this kind of method is the "embedding function" $f$. It is said that this function maps a $d$-dimensional space to a $d$-dimensional manifold in a higher, $m$-dimensional space. Is it a mathematical fact that such a function $f:\mathbb{R}^d \to \mathbb{R}^m$ must always produce a $d$-dimensional manifold as its image? I think not, since such a function can be far more general, scattering its inputs to arbitrary locations in the target space.

So is it just an assumption of the approach that the function $f$ is well behaved, in the sense that it maps a low-dimensional space more or less onto a manifold of the same dimension in the high-dimensional observation space? Or is it a mathematical fact? How should I interpret this?

Best Answer

It is indeed an assumption that the function $f$ is "well behaved".

If the function $f:\mathbb R^d \to \mathbb R^m$ were allowed to be arbitrary, then its image $f[\mathbb R^d] \subset \mathbb R^m$ in the target space could have any dimensionality between $0$ and $m$. The image can e.g. be the whole target space $\mathbb R^m$ (dimensionality $m$; even a merely continuous $f$ can achieve this, via a space-filling curve), a single point (dimensionality zero, e.g. a constant map), or a strange subset with fractional (fractal) dimensionality.
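
For illustration, here is a short sketch (the specific maps are my own examples) of two perfectly legitimate functions $\mathbb{R}^2 \to \mathbb{R}^3$ whose images are $0$- and $1$-dimensional respectively. Note that both happen to be smooth, which matters for the caveat below:

```python
import numpy as np

tau = np.random.default_rng(1).uniform(-1, 1, size=(1000, 2))  # latent points in R^2

# A constant map: the whole plane collapses to a single point (dimension 0).
f_point = lambda tau: np.tile([1.0, 2.0, 3.0], (len(tau), 1))

# A map that ignores the second coordinate: the image is a curve (dimension 1),
# not a 2-dimensional surface, even though the domain is R^2.
f_curve = lambda tau: np.column_stack([tau[:, 0], tau[:, 0]**2, np.sin(tau[:, 0])])

print(np.ptp(f_point(tau), axis=0))  # ~[0, 0, 0]: no spread at all
```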

But if the function $f$ is not just smooth (i.e. continuous with continuous derivatives of all orders; mathematicians write $f \in C^\infty$) but a smooth embedding, meaning it is additionally injective, its Jacobian has full rank $d$ at every point (an immersion), and it is a homeomorphism onto its image, then the image of $f$ is guaranteed to be a differentiable manifold of dimensionality $d$. Smoothness alone does not suffice: a constant map is smooth, yet its image is a single point. I think this assumption is implicit in the figure you provided, because the image of $f$ is displayed there as a nice curly surface which is obviously supposed to be "smooth".
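
One can spot-check the full-rank condition numerically. The sketch below (again my own illustration, reusing the Swiss-roll map from the question section) estimates the Jacobian of $f$ by central finite differences at a few random latent points and confirms that its rank equals $d = 2$:

```python
import numpy as np

def f(tau):
    """Swiss-roll map R^2 -> R^3: (t, s) -> (t cos t, s, t sin t)."""
    t, s = tau[0], tau[1]
    return np.array([t * np.cos(t), s, t * np.sin(t)])

def jacobian(f, tau, h=1e-6):
    """Finite-difference Jacobian of f at tau, shape (m, d)."""
    cols = [(f(tau + h * e) - f(tau - h * e)) / (2 * h)
            for e in np.eye(len(tau))]
    return np.column_stack(cols)

rng = np.random.default_rng(2)
for tau in rng.uniform([3.0, 0.0], [9.0, 10.0], size=(5, 2)):
    J = jacobian(f, tau)
    print(np.linalg.matrix_rank(J))   # prints 2 at every point: full rank d
```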

Perhaps it is even enough that $f$ is a continuously differentiable ($f \in C^1$) embedding, which is a weaker requirement (it assumes nothing about higher derivatives); the image would then be a $C^1$ manifold.