[Math] Jacobian of (f,g) is identically zero if and only if f = h ∘ g

Tags: derivatives, jacobian, vector-analysis

Suppose you have smooth functions $f,g : \mathbb{R}^2 \rightarrow \mathbb{R}$. I am wondering whether the following conjecture is true:

Conjecture: The Jacobian determinant $\left|\frac{\partial(f,g)}{\partial(u,v)}\right|$ is zero everywhere if and only if there exists a function $h:\mathbb{R}\rightarrow \mathbb{R}$ such that $f \equiv h\circ g$.

The $(\Leftarrow)$ direction is easy using the chain rule. I'm wondering whether the converse direction $(\Rightarrow)$ holds as well. I'm not sure how to proceed, but I note that one immediate consequence of the Jacobian vanishing everywhere is that the gradients of $f$ and $g$ are parallel everywhere:

$$\nabla f = \alpha(u,v) \nabla g.$$

It seems to follow that the level curves of $f$ and $g$ are parallel as well. So, intuitively, the value of $f$ at a point might be determined entirely by the value of $g$ there (i.e. by the level curve passing through the point). This would mean that there exists a function $h$ such that $f = h \circ g$.
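The easy $(\Leftarrow)$ direction can be checked symbolically. Here is a minimal sympy sketch with a hypothetical $g$ of my own choosing and an unspecified smooth $h$, confirming that the Jacobian determinant of $(h \circ g,\, g)$ vanishes identically:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# Hypothetical example: g is an arbitrary smooth function, h is an
# unspecified smooth function R -> R, and f = h o g.
g = u**2 + sp.sin(v)
h = sp.Function('h')
f = h(g)

# Jacobian determinant d(f,g)/d(u,v): f_u * g_v - f_v * g_u
J = sp.Matrix([[sp.diff(f, u), sp.diff(f, v)],
               [sp.diff(g, u), sp.diff(g, v)]]).det()

print(sp.simplify(J))  # 0: the chain rule makes the rows proportional
```

By the chain rule $\nabla f = h'(g)\,\nabla g$, so the two rows of the Jacobian are proportional and the determinant cancels term by term.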


Update: I would like to revise the conjecture to exclude trivial reasons for the determinant vanishing, e.g. exactly one of $f$ or $g$ being constant. Later, it may also be advantageous to exclude cases such as $f_v = g_v = 0$.

Conjecture (Revised): Let $f,g:\mathbb{R}^2\rightarrow \mathbb{R}$ be smooth functions, and furthermore suppose that the gradients of $f$ and $g$ are nonvanishing everywhere. The Jacobian determinant $\left|\frac{\partial(f,g)}{\partial(u,v)}\right|$ is zero everywhere if and only if there exists a function $h:\mathbb{R}\rightarrow \mathbb{R}$ such that $f \equiv h\circ g$.
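As a quick sanity check on why the trivial cases must be excluded, here is a hypothetical example of my own (not part of the conjecture) where $g$ is constant: the Jacobian determinant vanishes everywhere, yet no $h$ can satisfy $f = h \circ g$:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# Hypothetical trivial case: g is identically zero, f is not constant.
f = u
g = sp.Integer(0) * u   # g == 0 everywhere

J = sp.Matrix([[sp.diff(f, u), sp.diff(f, v)],
               [sp.diff(g, u), sp.diff(g, v)]]).det()
print(J)  # 0: the Jacobian determinant vanishes everywhere

# Yet h(g) = h(0) is constant for any h, while f = u is not,
# so the unrevised conjecture fails without a nondegeneracy assumption.
```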


Here is what I've tried so far. Because the gradients of $f$ and $g$ are nonvanishing everywhere, at each point the space of vectors $\vec{n}$ along which the directional derivative of $f$ (resp. $g$) vanishes is one-dimensional. Because the Jacobian determinant vanishes everywhere, the gradients of $f$ and $g$ are parallel, and hence $f$ and $g$ are locally constant in the same direction at each point. This suggests that the level curves of $f$ and $g$ coincide everywhere; that is, for each point $\vec{p}\in \mathbb{R}^2$,

$$f^{-1}\bigl(f(\vec{p})\bigr) = g^{-1}\bigl(g(\vec{p})\bigr)$$

Hence if you give me the value of $q = g(\vec{p})$, I should be able to find the value $r = f(\vec{p})$ without knowing $\vec{p}$ itself—the level sets coincide. The existence of a function $h: q \mapsto r$ would establish the proof.

More formally, let $L_f$ be the collection of level sets of $f$, i.e. $L_f \equiv \{ f^{-1}(f(p)) : p \in \mathbb{R}^2\}$, and let $L_g$ be the level sets of $g$. Evidently, there are maps $C_f : L_f \rightarrow \mathbb{R}$ and $C_g : L_g \rightarrow \mathbb{R}$ sending each level set to its corresponding value in $\mathbb{R}$.

What's special is that if the level sets are in fact equal as collections, then there is a bijection $s : L_f \rightarrow L_g$ matching each level set of $f$ with the identical level set of $g$. In that case, our desired function is

$$h \equiv C_f \circ s^{-1}\circ C_g^{-1}$$

This is a correct definition because we have:

\begin{align*}
p\in \mathbb{R}^2 &\quad \text{a point in }\mathbb{R}^2\\
g(p) &\quad \text{its image under $g$}\\
C_g^{-1}(g(p)) &\quad \text{the level set in $\mathbb{R}^2$ corresponding to $g(p)$}\\
s^{-1}C_g^{-1}(g(p)) &\quad \text{that same level set viewed as a level set of $f$}\\
C_f s^{-1}C_g^{-1}(g(p)) &\quad \text{the value corresponding to that $f$ level set}\\
= f(p) & \quad\text{since the level sets of $f$ and $g$ coincide}\\
= h(g(p)) &\quad\text{by the definition of $h$}\\
\end{align*}
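The composition $C_f \circ s^{-1} \circ C_g^{-1}$ can be illustrated on a discretized toy example (everything below, including the grid and the rounding tolerance, is my own hypothetical setup): group sample points by their $g$-value, check that $f$ takes a single value on each group, and read $h$ off as a lookup table from $g$-values to $f$-values:

```python
import numpy as np

# Hypothetical setup: f = h0(g) for a known h0, which the lookup table
# should recover without ever inspecting h0 directly.
h0 = np.cos
g_fun = lambda u, v: np.round(u**2 + v**2, 6)  # round so equal g-values hash equally
f_fun = lambda u, v: h0(g_fun(u, v))

pts = [(u, v) for u in np.linspace(-1, 1, 21) for v in np.linspace(-1, 1, 21)]

# C_g^{-1}: group points by g-value (each group is a discrete level set).
# s^{-1} is the identity pairing, since the level sets of f and g coincide.
# C_f: read off the single f-value on each group.
h = {}
for (u, v) in pts:
    q, r = g_fun(u, v), f_fun(u, v)
    assert abs(h.setdefault(q, r) - r) < 1e-9  # f is constant on each level set

print(np.allclose([h[g_fun(u, v)] for (u, v) in pts],
                  [f_fun(u, v) for (u, v) in pts]))  # True: f = h o g on the grid
```

The inner `assert` is exactly the well-definedness condition: if two points share a $g$-level set but had different $f$-values, no function $h$ could exist.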

My only remaining question is whether we can confirm, from the given assumptions, that the level sets are in fact all equal.

Best Answer

We are talking here about a theorem covered in many Advanced Calculus texts. But I have not found a proof on the web. Wikipedia treats "functional dependence" in a purely semantic way.

Assume that both $f$ and $g$ are defined in a neighborhood $U$ of $(0,0)\in{\mathbb R}^2$, and that $$g(0,0)=0 , \qquad g_{.2}(0,0)\ne0\ .$$

Claim. If $$\nabla f(x,y)\wedge \nabla g(x,y)=0\qquad\forall (x,y)\in U\tag{1}$$ then there is a $C^1$-function $t\mapsto h(t)$, defined in a neighborhood of $t=0$, such that $$f(x,y)=h\bigl(g(x,y)\bigr)\qquad\forall\ (x,y)\in U\ .$$

Proof. Consider the auxiliary function $$F(x,y,t):=g(x,y)-t\ .$$ As $F(0,0,0)=0$ and $F_y(0,0,0)=g_{.2}(0,0)\ne0$, the implicit function theorem allows us to solve $F(x,y,t)=0$ in a neighborhood of $(0,0,0)$ for the variable $y$: there is a $C^1$-function $\psi$, defined in a neighborhood of $(0,0)$, such that $g(x,y)=t$ is equivalent to $y=\psi(x,t)$. It follows that $g\bigl(x,\psi(x,t)\bigr)\equiv t$, so that $$g_{.1}\bigl(x,\psi(x,t)\bigr)+g_{.2}\bigl(x,\psi(x,t)\bigr)\psi_x(x,t)\equiv0\ ,$$ or $$\psi_x(x,t)=-{g_{.1}\bigl(x,\psi(x,t)\bigr)\over g_{.2}\bigl(x,\psi(x,t)\bigr)}\ .\tag{2}$$

Now define $$\tilde h(x,t):=f\bigl(x,\psi(x,t)\bigr)\ .\tag{3}$$ I claim that the dependence of $\tilde h$ on $x$ is only apparent. To prove this, compute $$\tilde h_x=f_{.1}\bigl(x,\psi(x,t)\bigr)+f_{.2}\bigl(x,\psi(x,t)\bigr)\psi_x(x,t)\equiv0\ ,$$ using $(2)$ and $(1)$. We may therefore replace $(3)$ by $$h(t):=f\bigl(x,\psi(x,t)\bigr)\ .\tag{4}$$

Consider now an arbitrary point $(x,y)\in U$, and put $g(x,y)=:t$. Then $y=\psi(x,t)$, and $(4)$ gives $$f(x,y)=f\bigl(x,\psi(x,t)\bigr)=h(t)=h\bigl(g(x,y)\bigr)\ .\qquad\qquad\square$$
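The construction in this proof can be traced symbolically. Below is a minimal sympy sketch on a hypothetical instance of my own choosing ($g(x,y)=x+y$, so $g_{.2}=1\ne0$, and $f=\sin\circ g$), following exactly the steps above: solve $g(x,y)=t$ for $y$, form $\tilde h(x,t)=f\bigl(x,\psi(x,t)\bigr)$, and check that the $x$-dependence drops out:

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)

# Hypothetical instance: g(x,y) = x + y (g_{.2} = 1 != 0), f = sin(g).
g = x + y
f = sp.sin(g)

# Solve g(x, y) = t for y, as the implicit function theorem does locally.
psi = sp.solve(sp.Eq(g, t), y)[0]   # psi(x, t) = t - x
h_tilde = f.subs(y, psi)            # h~(x, t) := f(x, psi(x, t))

print(sp.diff(h_tilde, x))  # 0: the x-dependence is only apparent, as claimed
print(h_tilde)              # sin(t), i.e. h(t) = sin(t)

# And indeed f(x, y) = h(g(x, y)):
print(sp.simplify(f - h_tilde.subs(t, g)))  # 0
```

Here `sp.solve` plays the role of the implicit function theorem; for a $g$ with no closed-form inverse in $y$, the theorem still guarantees $\psi$ locally even though `solve` would fail.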
