[Math] An integral form of Newton's method

ordinary-differential-equations, partial-differential-equations, real-analysis, soft-question

Warning: This seems like a silly sort of question, not the kind I'd ask out loud.

The contraction mapping theorem is a basic tool for proving existence of, and finding solutions to, equations. Given an algebraic (read: not differential) equation $f(x)=0$ where $f : \mathbb{R} \rightarrow \mathbb{R}$ is sufficiently smooth, it is often possible to prove that the mapping
\begin{equation}
\Phi_f(x) = x - \frac{f(x)}{f'(x)}
\end{equation}
is a contraction on some complete metric space by restricting $x$ to an interval. This yields existence of a solution to the original equation. This is Newton's method, which has both theoretical and practical significance.
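For concreteness, here is a minimal Python sketch of iterating $\Phi_f$ to a fixed point; the test function $f(x) = x^2 - 2$, the tolerance, and the iteration cap are illustrative choices, not part of the question.

```python
# Iterate the Newton map Phi_f(x) = x - f(x)/f'(x) until it reaches a fixed point.
# f(x) = x^2 - 2 is an illustrative choice; its positive root is sqrt(2).

def f(x):
    return x * x - 2.0

def df(x):
    return 2.0 * x

def newton_fixed_point(x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        x_next = x - f(x) / df(x)   # apply Phi_f
        if abs(x_next - x) < tol:   # fixed point reached (to tolerance)
            return x_next
        x = x_next
    return x

print(newton_fixed_point(1.5))      # ~1.4142135623730951
```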

Consider the ordinary differential equation $\dot{\mathbf{x}}=\mathbf{v}(\mathbf{x},t)$. The Picard mapping
\begin{equation*}
\Psi(\phi)(t) = \mathbf{x}_0 + \int_0^t \mathbf{v}(\phi(\tau),\tau) \; d\tau
\end{equation*}
is a contraction on a function space, under suitable conditions on $\mathbf{v}$. This yields an existence result for the given ODE with data $\mathbf{x}_0$. A very similar map appears in the study of certain nonlinear partial differential equations (e.g. Duhamel's principle applied to semilinear equations). In contrast with the numerical solution of an algebraic equation, the contraction in these cases isn't of much practical use.
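As a rough numerical illustration (not part of the question), one can discretize $\Psi$ with a cumulative trapezoidal rule and iterate it on the scalar test problem $\dot{x} = x$, $x(0) = 1$, whose exact solution is $e^t$; the grid, iteration count, and test problem are all assumptions made for the sketch.

```python
import numpy as np

# Picard iteration for x'(t) = v(x, t) with x(0) = x0,
# discretized on a uniform grid with a cumulative trapezoidal rule.

def picard(v, x0, t, n_iter=20):
    phi = np.full_like(t, x0)           # initial guess: the constant function x0
    for _ in range(n_iter):
        integrand = v(phi, t)
        # cumulative trapezoid: integral of the integrand from 0 to each t
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))
        )
        phi = x0 + integral             # apply the Picard map Psi
    return phi

t = np.linspace(0.0, 1.0, 201)
x = picard(lambda x, t: x, 1.0, t)      # test problem: v(x, t) = x, exact solution exp(t)
print(abs(x[-1] - np.e))                # small after enough iterations
```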

Clearly the derivative is useful for proving the existence of solutions to algebraic equations, and similarly the integral for differential equations. In line with the title, have I missed a theoretical application of the integral to solve algebraic equations, or of the derivative to solve differential equations? If not, is there a moral reason why we shouldn't expect to find such applications?

Best Answer

This is an answer based on my comments above. There is indeed an integral version of Newton's method for algebraic equations. Say you have an equation $$ f(x) = 0; $$ then we can set up an initial value problem: $$ \begin{cases} x'(t) = \frac{\alpha}{1+t^{\beta}}f(x), \\[3pt] x(0) = x_0. \end{cases} $$ As you can see, the equilibrium of the above ODE occurs exactly where $f(x) = 0$, i.e., $$ \lim_{t\to \infty} x(t) = r, $$ where $r$ is a real root of $f(x)$.

In that ODE, $\alpha$ can be positive or negative depending on the sign of $f(x_0)$, and we would like the solution to decay to the equilibrium solution quickly, e.g. by choosing $\beta = 1$.

This method has two advantages:

  • It circumvents the difficulty of choosing an initial guess for the traditional Newton's method.

  • It does not divide by $f'(x)$, so it still works when the derivative of $f(x)$ is zero somewhere.


Let's use the infamous $x^3 -2x +2 =0$ as an example: if your initial guess is $1$ or $0$, Newton's method oscillates forever between $1$ and $0$ (see the Wikipedia entry for Newton's method).

[Figure: Newton iterates oscillating between $0$ and $1$]
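A quick sketch of that failure, using nothing more than the classical Newton update (the number of printed iterations is arbitrary):

```python
# Classical Newton iteration for f(x) = x^3 - 2x + 2 starting from x0 = 0:
# the iterates bounce between 0 and 1 and never converge.

def f(x):
    return x**3 - 2*x + 2

def df(x):
    return 3*x**2 - 2

x = 0.0
for k in range(6):
    print(k, x)
    x = x - f(x) / df(x)
# prints 0.0, 1.0, 0.0, 1.0, ...
```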

Using the ODE approach, set up the following initial value problem with initial guess $0$, and $\alpha = -1, \beta = -1$: $$ \begin{cases} x'(t) = -\frac{t}{1+t}(x^3 -2x +2), \\[3pt] x(0) = 0. \end{cases} $$ Choosing time step $h = 0.05$, we can see that the solution $x(t)$ converges quickly to the equilibrium $x_e\approx -1.7692923542386314152$, which is the real root of $x^3 -2x +2 =0$:

[Figure: $x(t)$ converging to the equilibrium $x_e$]
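Here is a minimal sketch of that computation with forward Euler; the step count (equivalently, the final time $t = 100$) is an illustrative choice, not taken from the answer:

```python
# Solve x'(t) = -t/(1+t) * (x^3 - 2x + 2), x(0) = 0, with forward Euler,
# and watch x(t) settle at the real root of x^3 - 2x + 2 = 0.

def f(x):
    return x**3 - 2*x + 2

h = 0.05
x, t = 0.0, 0.0
for _ in range(2000):                   # integrate up to t = 100
    x += h * (-t / (1.0 + t)) * f(x)
    t += h

print(x)                                # ~ -1.76929235..., the real root
```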


The philosophy behind this is: whether we use integration or differentiation, we just need a contraction on the space where the solution lies, and this contraction must be "good", so that we get a close approximation after a few iterations.
