Complex Analysis – Implicit Function Theorem for Several Complex Variables

Tags: complex-analysis, several-complex-variables

This is the statement, in case you're not familiar with it.
Let $ f_j(w,z) $, $ j=1, \ldots, m $, be analytic functions of $ (w,z) = (w_1, \ldots, w_m, z_1, \ldots, z_n) $ in a neighborhood of $(w^0,z^0)$ in $\mathbb{C}^m \times \mathbb{C}^n $, and assume that $f_j(w^0,z^0)=0$ for $j=1,\ldots,m$ and that $$ \det\left\{\frac{\partial f_j}{\partial w_k}\right\}^m_{j,k=1} \neq 0 $$
at $(w^0,z^0)$.
Then the equations $f_j(w,z)=0$, $j=1,\ldots,m$, have a uniquely determined analytic solution $ w(z) $ in a neighborhood of $z^0$ such that $w(z^0) = w^0$.
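To see the theorem in action, here is a quick numerical check of the simplest case $m=n=1$; the function $f(w,z)=w^2-z$ and the base point $(w^0,z^0)=(1,1)$ are an illustrative choice of mine, not from Hörmander:

```python
import cmath

# f(w, z) = w**2 - z : one analytic equation in one unknown w (m = n = 1)
def f(w, z):
    return w * w - z

w0, z0 = 1.0, 1.0                 # base point, with f(w0, z0) = 0
assert f(w0, z0) == 0

# Nondegeneracy: the 1x1 Jacobian is ∂f/∂w = 2w, nonzero at the base point
assert 2 * w0 != 0

# The unique analytic solution with w(z0) = w0 is the principal square root
def w_of_z(z):
    return cmath.sqrt(z)

for z in (1.0, 1.0 + 0.1j, 0.9 - 0.05j):   # points near z0
    assert abs(f(w_of_z(z), z)) < 1e-12    # f(w(z), z) = 0 holds
assert w_of_z(z0) == w0
```

Note that the principal branch of the square root is exactly the "uniquely determined" local solution through $(1,1)$; the other branch $-\sqrt{z}$ also solves $f=0$ but fails the condition $w(z^0)=w^0$.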

In the proof of this statement that I found in Hörmander's book, he claims that, in order to apply the usual implicit function theorem, one must first prove that the equations $df_j = 0$ for $j =1, \ldots, m $ and $dz_k=0$ for $k = 1, \ldots, n$ imply $dw_j = 0$ for $ j = 1, \ldots, m$. I don't understand what this condition means or why it is needed.

Best Answer

The terms $df_j$, $dz_k$ and $d\omega_j$ are usually viewed as one-forms. We can expand the one-form $df_j$ in terms of the coordinate differentials,

$$ df_j = \frac{\partial f_j}{\partial \omega_l} d\omega_l + \frac{\partial f_j}{\partial z_l} dz_l, $$

where summation over the index $l$ is implied. The idea of why it is necessary that

$$df_j=0 \mbox{ for } j=1,\ldots,m \mbox{ and } dz_k=0 \mbox{ for } k=1,\ldots,n \implies d\omega_j=0 \mbox{ for } j=1,\ldots,m \;, $$

for the implicit function theorem is this: for the $\omega_j$'s to be uniquely determined by the $z_k$'s close to the point $(\omega^0,z^0)$, whenever there is no change in the $z_k$'s, i.e. $d z_k (v) =0$ for every tangent vector $v$, there must also be no change in the $\omega_j$'s, i.e. $d \omega_j (v) =0$ for every such $v$. In short, the $\omega_j$'s are not free to wander when the $z_k$'s are fixed.
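The expansion of $df_j$ in coordinate differentials can also be checked numerically: to first order, the one-form $df_j$ applied to a small displacement agrees with the actual increment of $f_j$. A minimal sketch for a single sample analytic function of $(\omega_1,\omega_2,z_1)$ (the function and the base point are my own choices, not from the text):

```python
import cmath

# A sample analytic function f(w1, w2, z1) -- an arbitrary illustrative choice
def f(w1, w2, z1):
    return w1 * w2 + cmath.exp(z1) * w1

# Its differential df = (∂f/∂w1) dw1 + (∂f/∂w2) dw2 + (∂f/∂z1) dz1,
# with the partial derivatives computed by hand
def df(w1, w2, z1, dw1, dw2, dz1):
    return (w2 + cmath.exp(z1)) * dw1 + w1 * dw2 + cmath.exp(z1) * w1 * dz1

p = (1.0, 2.0, 0.5)              # base point (w1, w2, z1)
h = (1e-6, -2e-6, 3e-6j)         # a small complex displacement (dw1, dw2, dz1)

lhs = df(*p, *h)                                        # the one-form applied to h
rhs = f(p[0] + h[0], p[1] + h[1], p[2] + h[2]) - f(*p)  # actual increment of f
assert abs(lhs - rhs) < 1e-10    # agreement up to second-order terms in h
```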

More concretely, we can relate this condition to the standard hypothesis of the implicit function theorem. To do this, we use $dz_k=0$ for $k=1,\ldots,n$ to reduce the equations $df_j =0$, $j=1,\ldots, m$, to the following:

$$ df_j = \frac{\partial f_j}{\partial \omega_l} d\omega_l =0. $$

If the determinant $\det\{\partial f_j / \partial \omega_k\}_{j,k} \not = 0$, then the matrix $\{\partial f_j / \partial \omega_k\}$ is invertible; let us write $\partial \omega_i / \partial f_j$ for its inverse, so that

$$ \frac{ \partial \omega_i}{\partial f_j }\frac{ \partial f_j}{\partial \omega_k} = \delta_{i,k} $$ (summation over $j$ is implied), where $\delta_{i,k} = 1$ if $i=k$ and $0$ otherwise. Finally, we multiply the reduced $df_j$ by this inverse and sum over $j$ to get

$$ 0=\frac{\partial \omega_i} { \partial f_j} df_j= \frac{\partial \omega_i} { \partial f_j} \frac{\partial f_j}{\partial \omega_l} d\omega_l =\delta_{i,l} \, d\omega_l = d\omega_i \; \mbox{, for } \; i= 1, \ldots, m \,. $$ So we find $d\omega_i =0$ for every $i$. Working backwards, you can see that Hörmander's condition is equivalent to $\det\{\partial f_j / \partial \omega_k\}_{j,k} \not = 0$, the usual hypothesis of the implicit function theorem.
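The linear-algebra step above, that invertibility of $\{\partial f_j/\partial \omega_k\}$ forces $d\omega = 0$ once $dz = 0$, can be sketched numerically. The matrix sizes ($m=2$, $n=1$) and entries below are arbitrary choices for illustration:

```python
# A plays the role of {∂f_j/∂ω_k}, B the role of {∂f_j/∂z_k} (arbitrary entries)
A = [[2.0, 1.0],
     [1.0, 3.0]]          # det A = 5 ≠ 0, so A is invertible
B = [[4.0],
     [-1.0]]

detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]
# Explicit inverse of the 2x2 matrix A, i.e. the {∂ω_i/∂f_j} of the answer
Ainv = [[ A[1][1] / detA, -A[0][1] / detA],
        [-A[1][0] / detA,  A[0][0] / detA]]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

# df = A dω + B dz = 0  together with  dz = 0  forces  dω = -A⁻¹(B dz) = 0
dz = [0.0]
domega = [-x for x in matvec(Ainv, matvec(B, dz))]
assert domega == [0.0, 0.0]

# For a generic dz, dω = -A⁻¹ B dz is uniquely determined and solves A dω + B dz = 0
dz = [2.0]
domega = [-x for x in matvec(Ainv, matvec(B, dz))]
residual = [a + b for a, b in zip(matvec(A, domega), matvec(B, dz))]
assert all(abs(r) < 1e-12 for r in residual)
```

The first assertion is the implication in the answer; the second shows the same invertibility is what pins down $d\omega$ uniquely in terms of $dz$, which is exactly what the implicit function theorem needs.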

I believe a proof using the implication $dz_k=0 \implies d\omega_j=0$ directly leads to a nice, quite intuitive proof of the implicit function theorem (see the third paragraph of this answer). It is also suited for use on more abstract manifolds, where the one-forms are well defined.

Hope this helps!
