Implicit function theorem: intuition behind the non-zero Jacobian determinant

implicit-function-theorem, multivariable-calculus, real-analysis

Implicit Function Theorem: Consider the general implicit function theorem for $m$ implicit equations in $n+m$ variables, of the form
$$\begin{align} \mathbf F(x_1,x_2,\ldots,x_n, u_1, u_2, \ldots, u_m) = 0, \end{align}$$
where $\mathbf F=\langle F_1, F_2,\ldots,F_m \rangle$.

I have been introduced to the requirement that the square Jacobian matrix of $\mathbf F$ with respect to $u_1,\ldots,u_m$ must be invertible, which means its determinant must be non-zero. This is apparently analogous to requiring $\frac{\partial F}{\partial y}\ne0$ for the $2D$ case $F(x,y)=0$.
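For concreteness, the standard unit-circle example illustrates the $2D$ condition:
$$F(x,y) = x^2 + y^2 - 1 = 0, \qquad \frac{\partial F}{\partial y} = 2y.$$
Near any point of the circle with $y\ne0$ the equation locally defines $y$ as a function of $x$ (namely $y=\sqrt{1-x^2}$ or $y=-\sqrt{1-x^2}$), but at the points $(\pm1,0)$, where $\frac{\partial F}{\partial y}=0$, no such function $y(x)$ exists on any neighbourhood.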

Can someone please explain any intuition behind this requirement?

Best Answer

Consider the warm-up exercise of formulating the implicit function theorem for linear functions, that is, where $\mathbf F(x,u)=0$ is given by a matrix multiplication formula like $$ Ax+Cu=b,$$ where $x$ is an $n$-vector and $u$ an $m$-vector, with an $m\times n$ matrix $A$, an $m\times m$ matrix $C$, and a fixed $m$-vector $b$. Linear algebra tells us this equation is always uniquely solvable for $u$ given $x$ precisely when the square matrix $C$ is non-singular, that is, has non-vanishing determinant.
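Indeed, when $C$ is invertible the solution can be written down explicitly:
$$Cu = b - Ax \quad\Longrightarrow\quad u = C^{-1}(b - Ax),$$
and this $u$ exists and is unique for every choice of $x$. Conversely, if $\det C = 0$, then for a given $x$ the system has either no solution or infinitely many, so no function $u(x)$ is determined.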

In this case $C$ is precisely the Jacobian of $\mathbf F$ with respect to the $u$ variables.

Now consider the non-linear but differentiable case. A differentiable function is, intuitively, one that is well approximated by a linear one, so one might expect that what holds in the exactly linear case carries over to the differentiable case. The technical content of the implicit function theorem is that this is so.
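To spell out the approximation: near a point $(x_0,u_0)$ with $\mathbf F(x_0,u_0)=0$, differentiability gives
$$\mathbf F(x,u) \approx D_x\mathbf F\,(x-x_0) + D_u\mathbf F\,(u-u_0),$$
and setting the right-hand side to zero is exactly a linear system of the form above, with $A = D_x\mathbf F$, $C = D_u\mathbf F$, and $b=0$ in the shifted coordinates. Invertibility of $D_u\mathbf F$ is then the condition that lets one solve for $u$ in terms of $x$: approximately by the linearization, and exactly, on some neighbourhood, by the theorem itself.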
