Multivariable Calculus – Local Parametrizations and Local Flatness in Submanifolds

manifolds, multivariable-calculus, smooth-manifolds

I am trying to prove that the four definitions of submanifolds of Euclidean space given in https://www.mathematik.uni-muenchen.de/~tvogel/Vorlesungen/TMP/skript-TMP.pdf are equivalent (I came to this reference while reading the book Vector Calculus, Linear Algebra, and Differential Forms by J.H. Hubbard and B.B. Hubbard):

Four equivalent definitions of the notion of a submanifold of dimension $k\in \mathbb{N}^+ = \{1,2,…\}$. In all four of them, $M \subset \mathbb{R}^n$, and smooth means belonging to $C^l$ for some positive integer $l$, or being infinitely differentiable.

Condition (a) Local parametrizations: For all $p \in M$ there are an open set $U \subset \mathbb{R}^k$, a neighbourhood $V \subset \mathbb{R}^n$ of $p$ and a smooth map $\varphi: U \rightarrow \mathbb{R}^n$ such that

  1. $\varphi$ is a homeomorphism onto $V \cap M$, and
  2. for all $x \in U$ the differential $D_x\varphi: \mathbb{R}^k \rightarrow \mathbb{R}^n$ is injective.

Condition (b) Locally flat: For all $p \in M$ there are an open neighbourhood $V \subset \mathbb{R}^n$ of $p$, an open set $W \subset \mathbb{R}^n$ and a diffeomorphism $\phi: V \rightarrow W$ such that

  1. $\phi(p) = 0$ and
  2. $\phi(V\cap M) = (\mathbb{R}^k \times \{0\in \mathbb{R}^{n-k}\}) \cap W$

Condition (c) Locally regular level set: For all $p \in M$ there are an open neighbourhood $V \subset \mathbb{R}^n$ of $p$ and a smooth function $F: V \rightarrow \mathbb{R}^{n-k}$ such that

  1. $F^{-1}(0) = V\cap M$, and
  2. for all $q \in M \cap V$ the differential $D_q F : \mathbb{R}^n \rightarrow \mathbb{R}^{n-k}$ is surjective

Condition (d) Locally a graph: For all $p \in M$ there are an open neighbourhood $V \subset \mathbb{R}^{n}$ of $p$, an open subset $U \subset \mathbb{R}^k$, a smooth function $g: U \rightarrow \mathbb{R}^{n-k}$ and a permutation $\sigma \in S_n$ such that

  1. $V\cap M = \{(y_{\sigma(1)}, y_{\sigma(2)}, \dots, y_{\sigma(n)})| y = (x, g(x)) \text{ where } x \in U\}$
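As a concrete illustration (my own example, not from the lecture notes), take $n=2$, $k=1$, $M = S^1 = \{(x,y) : x^2+y^2 = 1\}$ and $p = (0,1)$. Then the four conditions are witnessed by

$$
\begin{aligned}
\text{(a)}&\quad \varphi(t) = (\cos t, \sin t),\ t\in(0,\pi),\\
\text{(b)}&\quad \phi(x,y) = (x,\ x^2+y^2-1)\ \text{near } p,\ \text{so } \phi(p)=0 \text{ and } \phi(S^1\cap V) \subset \mathbb{R}\times\{0\},\\
\text{(c)}&\quad F(x,y) = x^2+y^2-1,\ \text{with } D_qF = (2q_1,\ 2q_2)\neq 0 \text{ for } q\in S^1,\\
\text{(d)}&\quad g(x) = \sqrt{1-x^2},\ x\in(-1,1).
\end{aligned}
$$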

I found that proving their equivalence in detail seems quite tedious. I tried to restate the proof of $a \Rightarrow b$ given in the book Calculus on Manifolds by Michael Spivak as follows, because I think the proof there is missing some details.

Could someone please check if the proof is missing any details or if there is some more elegant proof? Thanks!

Proof (a) => (b) in detail Denote $x_0 = \varphi^{-1}(p)$.
By item 2 of Condition (a), $D_{x_0} \varphi : \mathbb{R}^k \rightarrow \mathbb{R}^n$ is injective, hence its Jacobian matrix:

$$
D_{x_0}\varphi=
\left(\begin{array}{cccc}
\frac{\partial \varphi_{1}}{\partial x_{1}} & \frac{\partial \varphi_{1}}{\partial x_{2}} & \cdots & \frac{\partial \varphi_{1}}{\partial x_{k}} \\
\frac{\partial \varphi_{2}}{\partial x_{1}} & \frac{\partial \varphi_{2}}{\partial x_{2}} & \cdots & \frac{\partial \varphi_{2}}{\partial x_{k}} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial \varphi_{n}}{\partial x_{1}} & \frac{\partial \varphi_{n}}{\partial x_{2}} & \cdots & \frac{\partial \varphi_{n}}{\partial x_{k}}
\end{array}\right) \bigg\rvert_{(x_1, x_2, \cdots, x_k) = x_0}
$$

has rank $k$. Hence there are integers $1 \leq r_1 < r_2 < \dots < r_k \leq n$ such that

$$
\left(\begin{array}{cccc}
\frac{\partial \varphi_{r_1}}{\partial x_{1}} & \frac{\partial \varphi_{r_1}}{\partial x_{2}} & \cdots & \frac{\partial \varphi_{r_1}}{\partial x_{k}} \\
\frac{\partial \varphi_{r_2}}{\partial x_{1}} & \frac{\partial \varphi_{r_2}}{\partial x_{2}} & \cdots & \frac{\partial \varphi_{r_2}}{\partial x_{k}} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial \varphi_{r_k}}{\partial x_{1}} & \frac{\partial \varphi_{r_k}}{\partial x_{2}} & \cdots & \frac{\partial \varphi_{r_k}}{\partial x_{k}}
\end{array}\right) \bigg\rvert_{(x_1, x_2, \cdots, x_k) = x_0}
$$

has nonzero determinant. Let $\sigma$ be any permutation of $\{1,2,\cdots, n\}$ such that $\sigma(1) = r_1, \sigma(2) = r_2, \dots, \sigma(k) = r_k$.

Now define $P: \mathbb{R}^n \rightarrow \mathbb{R}^n$ by $P(y_1, y_2, \cdots, y_n) = (y_{\sigma(1)}, y_{\sigma(2)}, \cdots, y_{\sigma(n)}) = (y_{r_1}, y_{r_2}, \cdots, y_{r_k}, \cdots)$. Then $P$ is a diffeomorphism of $\mathbb{R}^n$: it is an invertible linear map (a permutation of the coordinates), so both $P$ and $P^{-1}$ are smooth. And
$$
D_x(P \circ \varphi) = \left(\begin{array}{cccc}
\frac{\partial \varphi_{r_1}}{\partial x_{1}} & \frac{\partial \varphi_{r_1}}{\partial x_{2}} & \cdots & \frac{\partial \varphi_{r_1}}{\partial x_{k}} \\
\frac{\partial \varphi_{r_2}}{\partial x_{1}} & \frac{\partial \varphi_{r_2}}{\partial x_{2}} & \cdots & \frac{\partial \varphi_{r_2}}{\partial x_{k}} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial \varphi_{r_k}}{\partial x_{1}} & \frac{\partial \varphi_{r_k}}{\partial x_{2}} & \cdots & \frac{\partial \varphi_{r_k}}{\partial x_{k}} \\
\vdots & \vdots & \vdots & \vdots
\end{array}\right) .
$$

The top $k \times k$ block of its value at $x_0=\varphi^{-1}(p)$ has nonzero determinant by the previous assumption.
Since $\varphi$ is smooth, so is $P \circ \varphi$, and the determinant of the top $k \times k$ block of $D_x(P \circ \varphi)$ depends continuously on $x$. So there is a neighborhood $U_1 \subset U$ of $x_0$ on which this determinant is nonzero. Define $\Phi: U_1 \times \mathbb{R}^{n-k} \rightarrow \mathbb{R}^n$ by
$$
\Phi(x,y) = P(\varphi(x)) + (0, y)
$$

where $x \in U_1, y \in \mathbb{R}^{n-k}$ and $(0, y)$ means that the first $k$ coordinates are zero and the last $n-k$ coordinates are those of $y$.

Then

$$
D\Phi =
\left(\begin{array}{cccccccc}
\frac{\partial \varphi_{r_1}}{\partial x_{1}} & \frac{\partial \varphi_{r_1}}{\partial x_{2}} & \cdots & \frac{\partial \varphi_{r_1}}{\partial x_{k}} & 0 & 0 & \cdots & 0 \\
\frac{\partial \varphi_{r_2}}{\partial x_{1}} & \frac{\partial \varphi_{r_2}}{\partial x_{2}} & \cdots & \frac{\partial \varphi_{r_2}}{\partial x_{k}} & 0 & 0 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\
\frac{\partial \varphi_{r_k}}{\partial x_{1}} & \frac{\partial \varphi_{r_k}}{\partial x_{2}} & \cdots & \frac{\partial \varphi_{r_k}}{\partial x_{k}} &0 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & 1 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
\vdots & \vdots & \vdots & \vdots & 0 & 0 & \cdots & 1
\end{array}\right)
$$

has nonzero determinant on $U_1 \times \mathbb{R}^{n-k}$, since by the block-triangular structure its determinant equals that of the top-left $k \times k$ block. Define $\Phi_1: U_1 \times \mathbb{R}^{n-k} \rightarrow \mathbb{R}^n$ by $\Phi_1 = P^{-1} \circ \Phi$; then its Jacobian matrix has nonzero determinant on $U_1 \times \mathbb{R}^{n-k}$, and $\Phi_1(x_0, 0) = P^{-1}(P(\varphi(x_0))+(0,0)) = \varphi(x_0) = p$. (Since $P^{-1}$ is linear, $\Phi_1(x,y) = P^{-1}(P(\varphi(x))+(0,y)) = \varphi(x)+P^{-1}(0,y)$; defining $\Phi_1$ directly, without $P$, seems very hard to write down.)
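The block-triangular determinant fact used here can be sanity-checked numerically. The following sketch (my own illustration; the random matrices merely stand in for the Jacobian blocks) verifies that $\det\begin{pmatrix}A & 0\\ C & I\end{pmatrix} = \det A$, so $D\Phi$ is invertible exactly when the top $k \times k$ block is:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3

# A stands in for the top k x k block (the first k rows of D(P o phi));
# C stands in for the remaining (n-k) x k rows.
A = rng.standard_normal((k, k))
C = rng.standard_normal((n - k, k))

# Assemble D_Phi = [[A, 0], [C, I]] as in the displayed matrix.
DPhi = np.block([[A, np.zeros((k, n - k))],
                 [C, np.eye(n - k)]])

# Block lower-triangular determinant: det(D_Phi) = det(A) * det(I) = det(A).
assert np.isclose(np.linalg.det(DPhi), np.linalg.det(A))
```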
By the inverse function theorem, there are a neighborhood $W_1 \subset U_1 \times \mathbb{R}^{n-k}$ of $(x_0, 0)$ and a neighborhood $V_1$ of $p$ such that the restriction $\Phi_1: W_1 \rightarrow V_1$ is a diffeomorphism. (Textbooks often state the inverse function theorem without saying that the inverse function is $C^{l}$ when the original function is $C^{l}$, but this does hold: the Jacobian matrix of the inverse is a rational function of the entries of the Jacobian matrix of the original function. Ref: Inverse function theorem: how show $F \in C^k \Rightarrow F^{-1} \in C^k$ with this method?)

Since $\varphi: U \rightarrow V\cap M$ is a homeomorphism, and $W_1 \cap (\mathbb{R}^k \times \{0\in \mathbb{R}^{n-k}\})$ is (after identifying $\mathbb{R}^k \times \{0\}$ with $\mathbb{R}^k$) an open neighborhood of $x_0$ contained in $U_1 \subset U$, it follows that $\Phi_1(W_1 \cap (\mathbb{R}^k \times \{0\in \mathbb{R}^{n-k}\})) = V_2 \cap M$ for some open $V_2$. (Is there some counterexample showing that $V_2$ cannot always be taken to be $V_1$?) Let $V_3 = V_1 \cap V_2$, let $W_3 = \Phi_1^{-1}(V_3)$, and let $\phi_3: V_3 \rightarrow W_3$ be the restriction of $\Phi_1^{-1}$ to $V_3$. Then $\phi_3$ satisfies:

  1. $\phi_3: V_3 \rightarrow W_3$ is a diffeomorphism.
  2. $\phi_3(p) = (x_0, 0)$
  3. $\phi_3(V_3 \cap M) = W_3 \cap (\mathbb{R}^k \times \{0\in \mathbb{R}^{n-k}\})$

Take $\phi_4(q) = \phi_3(q) - (x_0, 0)$ for $q \in V_3$. Then $V_3$, $\phi_4(V_3)$, $\phi_4$ are the $V, W, \phi$ of condition (b). ⬛

Though the idea seems quite simple, the proof is tedious, and the part proving that $\phi_3(V_3 \cap M) = W_3 \cap (\mathbb{R}^k \times \{0\in \mathbb{R}^{n-k}\})$ does not seem right or well stated. Could someone help to check the proof or provide a more rigorous proof please? And any hint for proving the other directions is greatly appreciated too! (The key point, as I understand it, is the inverse/implicit function theorem, but using it to give a rigorous proof of the equivalence of the above four definitions still seems quite hard for me.) Thanks!

Best Answer

What I shall prove below is the implications $(d)\implies (c)\implies (b)\implies(a)\implies (d)$, because I think it's the most efficient. If you want to reverse the order of the implications, I'm sure you can do so with slight modifications. In what follows, I'm not going to explicitly write out the coordinate permutations; I'll just mention when we must permute them. Also, note that the $U,V,W$ from each statement are not the same, so sometimes I may use a prime to indicate a slightly smaller set.

Also, one final obligatory remark: the proofs below are definitely not the only way of doing things. The inverse and implicit function theorems (and also the constant rank theorem) are equivalent, so it is a matter of choice which theorem one prefers to invoke. I just happen to like the inverse function theorem more than the implicit function theorem, which is why I used it more in the proofs below. Also, regarding regularity: in the inverse function theorem, the local inverse has the same degree of smoothness as the original map, and likewise, in the implicit function theorem, the implicit function has the same degree of smoothness.


(d) $\implies$ (c):

I think this is trivial so I leave it to you. (Hint: after permuting coordinates, take $F(x,y):=y-g(x)$; then $F^{-1}(\{0\})$ is the graph of $g$, and $DF = \begin{pmatrix}-Dg & I_{n-k}\end{pmatrix}$ is surjective everywhere.)


Proof (c) $\implies$ (b):

Given $p\in M$, by assumption, there is an open $V\subset \Bbb{R}^n$ containing $p$ and a smooth map $F:V\to\Bbb{R}^{n-k}$ such that $F^{-1}(\{0\})=M\cap V$ and for each $q\in F^{-1}(\{0\})$, $DF_q$ is surjective. Since $DF_p$ is surjective, it means the matrix $F'(p)$ has an $(n-k)\times (n-k)$ submatrix which is invertible; so we may as well permute the coordinates and write them as $(x,y)\in V$ so that $\frac{\partial F}{\partial y}(p)$ is invertible (like I mentioned in the comments, this is the sort of thing you should write out explicitly in full detail once, and then never again, because it makes the rest of the proof clunky). Define $h:V\subset \Bbb{R}^k\times\Bbb{R}^{n-k}\to\Bbb{R}^k\times\Bbb{R}^{n-k}$ as \begin{align} h(x,y):=(x,F(x,y)) \end{align} Then, we have \begin{align} h'(p)&= \begin{pmatrix} I_k& 0\\ \frac{\partial F}{\partial x}(p)& \frac{\partial F}{\partial y}(p) \end{pmatrix} \end{align} Due to the structure of this block matrix, it is evident that it is invertible. Hence, by the inverse function theorem, there exists an open neighborhood $V'$ of $p$ in $V$, and an open neighborhood $W$ of $h(p)=(p_1,\dots, p_k,F(p))=(p_1,\dots, p_k,0_{\Bbb{R}^{n-k}})$ such that the restriction $\phi:=h|_{V'}:V'\to W$ is a diffeomorphism. Note that $\phi[M\cap V']=W\cap (\Bbb{R}^k\times \{0\})$, because

  • if $(x,y)\in \phi[M\cap V']$, then there exists $(a,b)\in M\cap V'$ such that $(x,y)=\phi(a,b):=h(a,b):=(a,F(a,b))$. But recall that $M\cap V'\subset M\cap V=F^{-1}(\{0\})$, hence $F(a,b)=0$ and $(x,y)=(a,0)$. This proves the inclusion $\phi[M\cap V']\subset W\cap (\Bbb{R}^k\times \{0\})$.
  • Conversely, if $(x,y)\in W\cap (\Bbb{R}^k\times \{0\})=\phi[V']\cap (\Bbb{R}^k\times \{0\})$, then for some $(a,b)\in V'$ (namely $(a,b)=\phi^{-1}(x,y)$), we have $(x,y)=(x,0)=\phi(a,b):=(a,F(a,b))$. This means $F(a,b)=0$, and thus $(a,b)\in F^{-1}(\{0\})=M\cap V$. But we already know that $(a,b)\in V'$, therefore $(a,b)\in M\cap V'$ (since $V'\subset V$ by definition). This proves the reverse inclusion.

I should remark that I only wrote out all these details for the sake of completeness. If you think for a moment, these last two bullet points, which prove that $\phi[M\cap V']=\phi[V']\cap (\Bbb{R}^k\times \{0\})$, are pretty obvious from the definition of $\phi$.
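To see the map $h(x,y)=(x,F(x,y))$ in action, here is a small numerical sketch (my own example, not part of the proof) with the unit sphere, $F(x_1,x_2,y)=x_1^2+x_2^2+y^2-1$, near the north pole $p=(0,0,1)$, where $\frac{\partial F}{\partial y}=2y$ is invertible. Points of the sphere near $p$ land in $\Bbb{R}^2\times\{0\}$:

```python
import math

# F : R^3 -> R cuts out the unit sphere; here k = 2, n = 3.
def F(x1, x2, y):
    return x1**2 + x2**2 + y**2 - 1.0

# h(x, y) = (x, F(x, y)); dF/dy = 2y is invertible near p = (0, 0, 1).
def h(x1, x2, y):
    return (x1, x2, F(x1, x2, y))

# Sample sphere points near the north pole: h sends them into R^2 x {0}.
for t in [0.0, 0.1, 0.2]:
    for s in [0.0, 0.1]:
        x1 = math.sin(t) * math.cos(s)
        x2 = math.sin(t) * math.sin(s)
        y = math.cos(t)  # (x1, x2, y) lies on the sphere
        assert abs(h(x1, x2, y)[2]) < 1e-12  # last coordinate is F = 0 on M
```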


Proof (b) $\implies$ (a):

Let $V,p,W,\phi$ be as in (b). Now, consider $\theta:\Bbb{R}^k\to\Bbb{R}^k\times \Bbb{R}^{n-k}$, $t\mapsto (t,0)$ and define $U:= \theta^{-1}(W)$; this is an open subset of $\Bbb{R}^k$. Now, consider $\psi:=\phi^{-1}\circ \theta:U\to V$; this is the desired parametrization because

  • $\psi$ is injective since $\phi^{-1}$ and $\theta$ are. Also, at every point $\psi$ has injective derivative since $\phi^{-1}$ and $\theta$ do.
  • $\psi[U]=\phi^{-1}[\theta[U]]=\phi^{-1}\left(W\cap (\Bbb{R}^k\times \{0\})\right)=M\cap V$.
  • $\theta$ is a homeomorphism of $\Bbb{R}^k$ onto its image, which is $\Bbb{R}^k\times \{0\}$, therefore $\psi=\phi^{-1}\circ \theta$ is also a homeomorphism onto its image.
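The composition $\psi=\phi^{-1}\circ\theta$ can be made concrete with the unit circle near $p=(0,1)$ (my own choice of straightening, assumed for illustration): $\phi(x,y)=(x,\ x^2+y^2-1)$ has local inverse $\phi^{-1}(u,v)=(u,\sqrt{1+v-u^2})$ on the upper half plane, and $\psi(t)=\phi^{-1}(t,0)$ parametrizes the upper half circle:

```python
import math

# Straightening map near p = (0, 1) and its local inverse on the upper half.
def phi(x, y):
    return (x, x**2 + y**2 - 1.0)

def phi_inv(u, v):
    return (u, math.sqrt(1.0 + v - u**2))

# theta : R^k -> R^k x R^{n-k}, t -> (t, 0), and psi = phi_inv o theta.
def theta(t):
    return (t, 0.0)

def psi(t):
    return phi_inv(*theta(t))

# psi maps (-1, 1) into the circle, and phi straightens it back onto R x {0}.
for t in [-0.5, 0.0, 0.7]:
    x, y = psi(t)
    assert abs(x**2 + y**2 - 1.0) < 1e-12
    assert abs(phi(x, y)[1]) < 1e-12
```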

Proof (a) $\implies$ (d):

Let $\psi:U\subset \Bbb{R}^k\to M\cap V\subset V\subset \Bbb{R}^n$ be a local parametrization of $M$ with $\psi(0)=p$. This implication has a bit in common with what you've done for $(a)\implies (b)$. Now, since $D\psi_0$ is injective, its $n\times k$ matrix representation $\psi'(0)$ has $k$ linearly independent rows. So, by a permutation of the target space of $\psi$, we can write $\psi:U\to V\subset \Bbb{R}^k\times \Bbb{R}^{n-k}$, $\psi(t):=(\psi_1(t),\psi_2(t))$, such that $(\psi_1)'(0)$ is an invertible $k\times k$ matrix. By the inverse function theorem, there exist open neighborhoods $U'$ of $0$ in $U$, and $A$ of $\psi_1(0)$ in $\Bbb{R}^k$, such that the restriction $\psi_1:U'\to A$ is a diffeomorphism.

Now, since $\psi$ is a homeomorphism of $U$ onto its image $M\cap V$, we can write $\psi[U']=M\cap V'$ for some open $V'\subset V\subset \Bbb{R}^n$. Now, consider the smooth function $g=\psi_2\circ (\psi_1|_{U'})^{-1}:A\to \Bbb{R}^{n-k}$. Note that \begin{align} \text{$(x,y)\in M\cap V'=\psi[U']$}&\iff \text{there is a $t\in U'$ such that $(x,y)=(\psi_1(t),\psi_2(t))$}\\ &\iff \text{$x\in A$ and $y=(\psi_2\circ (\psi_1|_{U'})^{-1})(x)$}\\ &\iff \text{$x\in A$ and $y=g(x)$} \end{align} This shows $M\cap V'=\text{graph}(g)$, thereby completing the proof.


Note that the last implication should be intuitively obvious. For example, if we try to parametrize part of the unit circle in the obvious way as $\psi(t)=(\cos t, \sin t)$, then for $t$ in a certain domain, we can solve the system $(x,y)=(\cos t, \sin t)$ as $t=\arccos(x)$, and hence $y=\sin(\arccos(x))=\sqrt{1-\cos^2(\arccos(x))}=\sqrt{1-x^2}$ (again being careful with the domains etc). So, we have expressed $y$ as a function of $x$ and now the set of points $(x,y)$ is simply the graph of the mapping $x\mapsto \sqrt{1-x^2}$.

In this special case, we were able to do things explicitly because we know that (by restricting the domain appropriately) $\cos$ has an inverse function. In the general case though, it is the inverse function theorem which does the heavy-lifting for us. To get a really good understanding of what each implication is saying, I suggest you follow along line by line with the circle example (for which you have an explicit description in each of the four formats).
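The circle computation above is easy to check numerically; here is a minimal sketch (function names are mine) verifying that on the upper half circle the parametrization agrees with the recovered graph function:

```python
import math

# Local parametrization psi(t) = (cos t, sin t) of the upper unit circle,
# with t in (0, pi) so that psi_1 = cos is invertible there.
def psi(t):
    return (math.cos(t), math.sin(t))

# Graph function recovered via t = arccos(x): g(x) = sin(arccos(x)) = sqrt(1 - x^2).
def g(x):
    return math.sqrt(1.0 - x**2)

# On the upper half circle, y = g(x) for every point (x, y) = psi(t).
for t in [0.3, 1.0, 2.0, 2.8]:
    x, y = psi(t)
    assert abs(y - g(x)) < 1e-12
```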
