Newton-Raphson method on general strongly convex functions

convergence-divergence, newton-raphson, optimization

Let $f:\mathbb{R}^n \rightarrow \mathbb{R}$ be a three times continuously differentiable function that is strongly convex. Does the Newton-Raphson method for minimization, given by

$$
\mathbf{x_{k+1}}=\mathbf{x_{k}}-\left(\nabla^2 f(\mathbf{x_{k}})\right)^{-1}\nabla f(\mathbf{x_k})
$$

converge to the global minimum for any given initial guess $\mathbf{x_0}\in\mathbb{R}^n$?
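For concreteness, here is a minimal sketch of the undamped iteration above in Python (the callables `grad` and `hess` and their names are placeholders of my own, not part of the original question):

```python
import numpy as np

def newton_raphson(grad, hess, x0, tol=1e-10, max_iter=100):
    """Undamped Newton-Raphson iteration for minimization:
    x_{k+1} = x_k - (hess f(x_k))^{-1} grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break  # gradient (almost) zero: stationary point reached
        # Solve the Newton system H d = g rather than forming the inverse.
        x = x - np.linalg.solve(hess(x), g)
    return x
```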

There is a known way to prove that the statement is true in this situation: if one proves that the Armijo condition holds with $\alpha=1$ for the Newton-Raphson method, then my statement follows.
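For reference, the Armijo (sufficient decrease) condition with unit step $\alpha=1$ along the Newton direction $\mathbf{d_k}=-\left(\nabla^2 f(\mathbf{x_k})\right)^{-1}\nabla f(\mathbf{x_k})$ reads, for some fixed $c\in(0,1)$,

$$
f(\mathbf{x_k}+\mathbf{d_k})\le f(\mathbf{x_k})+c\,\nabla f(\mathbf{x_k})^{\top}\mathbf{d_k}.
$$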

Usually this question is posed for equations and with $n=1$, as in "Proof of convergence of newton method for convex function" and "Newton's method works for convex real functions". There is also an as-yet unanswered question related to mine, namely "Conditions under which the damped Newton method is globally convergent?".

Best Answer

The answer is no: Newton's method for minimization does not necessarily converge for every strongly convex function and every initial guess. Stephen Boyd and Lieven Vandenberghe give an example of such a function in their book *Convex Optimization*.

Define $f:\mathbb{R}\rightarrow\mathbb{R}$ by $f(x)=\ln(\exp(-x)+\exp(x))$ and take the initial guess $x_0=1.1$. Newton's method does not converge in this situation.
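To see why, note that $f'(x)=\tanh(x)$ and $f''(x)=1/\cosh^2(x)$, so the undamped Newton step simplifies to $x_{k+1}=x_k-\sinh(x_k)\cosh(x_k)$; once $|x_k|$ exceeds approximately $1.0886$, one gets $|x_{k+1}|>|x_k|$, so from $x_0=1.1$ the iterates oscillate in sign and diverge. Here is a minimal numerical check in Python (my own sketch, not taken from the book):

```python
import math

x = 1.1  # initial guess from the example
for k in range(5):
    # f'(x) = tanh(x) and f''(x) = 1/cosh(x)^2, so the Newton step
    # x - f'(x)/f''(x) simplifies to x - sinh(x)*cosh(x).
    x = x - math.sinh(x) * math.cosh(x)
    print(f"x_{k + 1} = {x:.4f}")
# The magnitudes grow: roughly -1.13, 1.23, -1.70, 5.72, -2.3e4, ...
# The iterates oscillate in sign and diverge from the minimizer x* = 0;
# math.sinh would overflow if the loop ran much longer.
```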
