Finding an explicit function after applying the implicit function theorem (or Dini's theorem)

Tags: implicit function, implicit-function-theorem, multivariable-calculus

While I was trying to understand the importance of this powerful theorem, a doubt arose.

Dini's theorem does not provide a formula for the implicit function we're looking for; it only guarantees its existence, provided that, say, a two-variable function $F(x,y)$ satisfies the assumptions of the theorem.

However, Dini's theorem does guarantee that we can compute the derivative of the implicit function $f(x)$ in a neighborhood of the initial point $(x_0,y_0)$; in fact, $f'(x) = -\frac{F_x}{F_y}$.

Now, I ask whether

(a) this formula holds at every point satisfying the equation $F(x,y)=0$;

(b) we can recover the explicit function by computing an antiderivative of $f'(x)$: integrating gives a family of antiderivatives (the indefinite integral), and we could then determine the constant of integration $c$ from the initial condition, since we know that $F(x_0,y_0)=0$.

Best Answer

Let's review the two-variable Implicit Function Theorem:

Let $U\subseteq\mathbb{R}^2$ be open and $F\colon U\rightarrow\mathbb{R}$ be a continuously differentiable function. If $(x_0,y_0)\in U$ is such that $F(x_0,y_0)=0$ and $\partial_yF(x_0,y_0)\neq0$, then there exist $\varepsilon,\delta>0$ and a unique mapping $g\colon(x_0-\varepsilon,x_0+\varepsilon)\rightarrow(y_0-\delta,y_0+\delta)$ such that $F(x,g(x))=0$ for all $x\in(x_0-\varepsilon,x_0+\varepsilon)$. Furthermore, $g$ is continuously differentiable.

The strength of this theorem is manifold: it asserts the local existence of an implicit function, its uniqueness, and its continuous differentiability. The theorem often comes with an explicit formula for the derivative, but that part is trivial once you have differentiability. We have $$F(x,g(x))=0\qquad\forall x\in(x_0-\varepsilon,x_0+\varepsilon).$$ Differentiating this identity with the chain rule yields $$0=\begin{pmatrix}\partial_xF(x,g(x))&\partial_yF(x,g(x))\end{pmatrix}\begin{pmatrix}1\\g^{\prime}(x)\end{pmatrix}=\partial_xF(x,g(x))+g^{\prime}(x)\partial_yF(x,g(x))\qquad\forall x\in(x_0-\varepsilon,x_0+\varepsilon).$$ Equivalently, $$g^{\prime}(x)=-\frac{\partial_xF(x,g(x))}{\partial_yF(x,g(x))}\qquad\forall x\in(x_0-\varepsilon,x_0+\varepsilon).$$ Note that this is well-defined, because $\partial_yF(x,g(x))\neq0$ in a sufficiently small neighborhood of $x_0$, since $F$ is continuously differentiable. This answers (a) affirmatively: the formula holds at every point satisfying $F(x,y)=0$ at which $\partial_yF\neq0$, i.e., wherever the theorem applies.
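To see the formula in action, here is a minimal sympy sketch (my addition, not part of the original answer) that checks $g'(x)=-F_x/F_y$ on a case where the implicit function is known explicitly: the unit circle $F(x,y)=x^2+y^2-1$ near $(x_0,y_0)=(0,1)$, where $g(x)=\sqrt{1-x^2}$.

```python
# Sketch only: verify g'(x) = -F_x/F_y on F(x, y) = x^2 + y^2 - 1,
# where the implicit function near (0, 1) is known: g(x) = sqrt(1 - x^2).
import sympy as sp

x, y = sp.symbols('x y')
F = x**2 + y**2 - 1
g = sp.sqrt(1 - x**2)

# The implicit-function formula, evaluated along y = g(x):
gprime_formula = (-sp.diff(F, x) / sp.diff(F, y)).subs(y, g)

# Compare against differentiating g directly; the difference simplifies to 0.
print(sp.simplify(gprime_formula - sp.diff(g, x)))
```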

However, note that, in general, the derivative $g^{\prime}(x)$ will depend on $g(x)$, as seen in the above formula. You know that $g(x_0)=y_0$, so you can explicitly calculate $g^{\prime}(x_0)$, but you generally cannot calculate $g^{\prime}(x)$ any better than you can calculate $g(x)$ itself, which you usually cannot do, as otherwise you wouldn't need to apply the Implicit Function Theorem (there is a reason why these functions are called implicit, after all). Since you don't have $g^{\prime}(x)$ explicitly, you also cannot easily find a primitive. Reconstructing $g$ from the given equation is akin to solving a differential equation, which, in general, can be a very hard problem. This answers (b) essentially in the negative.
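To make the analogy with differential equations concrete, here is a hedged numerical sketch (my addition; it uses scipy and the function $F(x,y)=x+y+y^5$ from the example in the next paragraph): reconstructing $g$ amounts to solving the initial value problem $y'=-F_x/F_y$, $y(x_0)=y_0$, which in general can only be done numerically.

```python
# Sketch only: reconstruct g numerically as the solution of the IVP
# y' = -F_x/F_y = -1/(1 + 5 y^4),  y(0) = 0,  for F(x, y) = x + y + y^5.
from scipy.integrate import solve_ivp

def gprime(x, y):
    return -1.0 / (1.0 + 5.0 * y**4)

sol = solve_ivp(gprime, (0.0, 2.0), [0.0], dense_output=True, rtol=1e-8)
g1 = sol.sol(1.0)[0]
# The residual F(1, g(1)) should be close to 0 if the reconstruction worked.
print(g1, 1.0 + g1 + g1**5)
```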

For an example, take $F\colon\mathbb{R}^2\rightarrow\mathbb{R},\ (x,y)\mapsto x+y+y^5$. For any fixed $x\in\mathbb{R}$, this increases strictly in $y$ from $-\infty$ to $+\infty$, so there is a unique $y=g(x)$ such that $x+g(x)+g(x)^5=0$. According to Wolfram, this $g$ can only be expressed as an infinite series. However, you can apply the Implicit Function Theorem and deduce that $g^{\prime}(x)=-(1+5g(x)^4)^{-1}$ for all $x\in\mathbb{R}$. This tells you that $g$ is monotonically decreasing, which is useful, although you could also see this from just looking at $F$. Reconstructing $g$ from this expression, however, seems hard; I wouldn't know how to do it, but you can give it a try.
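As a sanity check (again my addition, not part of the original answer), one can evaluate this particular $g$ pointwise by root finding and compare the implicit-function formula for $g'$ with a finite-difference quotient:

```python
# Sketch only: evaluate g(x) as the unique real root of x + y + y^5 = 0,
# then check g'(x) = -(1 + 5 g(x)^4)^(-1) against a central difference.
from scipy.optimize import brentq

def g(x):
    # The bracket [-|x| - 1, |x| + 1] always contains the root, because
    # y -> y + y^5 is strictly increasing and onto the reals.
    return brentq(lambda y: x + y + y**5, -abs(x) - 1, abs(x) + 1)

x0, h = 1.0, 1e-6
finite_difference = (g(x0 + h) - g(x0 - h)) / (2 * h)
implicit_formula = -1.0 / (1.0 + 5.0 * g(x0)**4)
print(finite_difference, implicit_formula)  # should agree to ~6 digits
```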

Lastly, a more advanced version of the Implicit Function Theorem states that if $F$ is analytic, then $g$ is analytic too. In that case, you can in principle calculate all the derivatives of $g$ at $x_0$ from the derivatives of $F$ at $(x_0,y_0)$ (by repeatedly differentiating the implicit equation) and thus expand $g$ into a Taylor series around $x_0$. This can, of course, be difficult and may not prove insightful. If you want a reference on this, or on the Implicit Function Theorem generally, check out "The Implicit Function Theorem: History, Theory, and Applications" by Krantz and Parks.
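As an illustration (my addition), here is a small sympy sketch that expands the $g$ from the example above into a Taylor series around $x_0=0$ (where $g(0)=0$). Instead of literally differentiating the identity repeatedly, it uses the equivalent device of plugging a polynomial ansatz into $F(x,g(x))=0$ and matching coefficients:

```python
# Sketch only: Taylor coefficients of g at 0 for F(x, y) = x + y + y^5,
# found by matching coefficients of F(x, g(x)) = 0 up to order 5.
import sympy as sp

x = sp.Symbol('x')
a = sp.symbols('a1:6')  # unknown coefficients a1, ..., a5; g(0) = 0 is built in
g_ansatz = sum(ai * x**i for i, ai in enumerate(a, start=1))

F_along_g = sp.expand(x + g_ansatz + g_ansatz**5)
equations = [F_along_g.coeff(x, n) for n in range(1, 6)]
sol = sp.solve(equations, a, dict=True)[0]

print(g_ansatz.subs(sol))  # x**5 - x, i.e. g(x) = -x + x^5 + O(x^6)
```

One can check by hand that with $g(x)=-x+x^5$ the residual $x+g(x)+g(x)^5$ is $O(x^9)$, consistent with the series expansion.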
