Numerical Methods – Advantage in Fixing the Jacobian for Several Steps (Modified Newton’s Method)

Tags: computational complexity, newton raphson, numerical linear algebra, numerical methods

Consider a modification to Newton's method for which we fix the Jacobian for $k$ steps. ($x \in \mathbb{R}^m$)
$$x_{rk+j+1} = x_{rk+j} - J(x_{rk})^{-1}f(x_{rk+j}), \qquad j = 0,1,\dots,k-1$$

I have read in my text (Atkinson's Numerical Analysis) that this scheme is more efficient per step (though slower to converge) than the original Newton's method:

> the linear system of $m$ unknowns would require $\frac{2}{3}m^3$ calculations for $j = 0$, but for $j = 1, \dots, k-1$, would only require $2m^2$ operations.

At first I assumed this is because we only need to compute the inverse of the Jacobian once and can then simply multiply by it for the remaining steps, but my instructor told me that in practice we never form the inverse and only solve for the unknowns. I think this means there is another way of looking at the problem. Is there another way to think about how this reduces the computation?
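For concreteness, here is a minimal sketch of the modified iteration, assuming SciPy's `lu_factor`/`lu_solve` for the factor-once, solve-cheaply pattern; the function `modified_newton` and the test system are illustrative, not from Atkinson:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def modified_newton(f, J, x, k=3, n_outer=40):
    """Sketch: refactor the Jacobian once per k steps, reuse its LU factors."""
    for _ in range(n_outer):
        lu, piv = lu_factor(J(x))              # O(m^3): done once per k steps
        for _ in range(k):
            x = x - lu_solve((lu, piv), f(x))  # O(m^2): two triangular solves
    return x

# Example system: x0^2 + x1^2 = 4 and x0 = x1, with root (sqrt 2, sqrt 2)
f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])

x = modified_newton(f, J, np.array([1.0, 2.0]))
```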

Best Answer

Newton's method is frequently stated as $$ x_{k+1} = x_{k} - Df(x_k)^{-1} f(x_k),$$ where $Df(x) \in \mathbb{R}^{n \times n}$ is the Jacobian of $f$ at the point $x$. In practice, we use the equivalent update $$ x_{k+1} = x_{k} - z_k,$$ where $z_k$ is the solution of the linear system $$ Df(x_k) z_k = f(x_k).$$ If we already know an LU factorization of $Df(x_k)$, then computing $z_k$ amounts to one forward and one backward triangular substitution, which costs $O(n^2)$ arithmetic operations. Computing an LU factorization from scratch costs $O(n^3)$, which is why recycling the factorization is even considered.
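The solve-don't-invert form of a single Newton step can be sketched as follows (a minimal illustration with NumPy; the helper name `newton_step` and the scalar test problem are my own):

```python
import numpy as np

def newton_step(f, Df, x):
    # Solve Df(x) z = f(x); never form Df(x)^{-1} explicitly.
    z = np.linalg.solve(Df(x), f(x))
    return x - z

# Example: f(x) = x^2 - 2, whose positive root is sqrt(2)
f = lambda x: np.array([x[0]**2 - 2.0])
Df = lambda x: np.array([[2.0 * x[0]]])

x = np.array([1.5])
for _ in range(10):
    x = newton_step(f, Df, x)
```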


We compute the inverse only when the application explicitly requires it. Moreover, rounding errors made while constructing the inverse can be magnified if we then use the inverse to compute the solution of a linear system.
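A small experiment along these lines, assuming SciPy's `hilbert` for a notoriously ill-conditioned test matrix (the setup is my own illustration): compare the residual of a factor-and-solve approach against the residual obtained by multiplying with an explicitly computed inverse. Typically the solve-based residual is at machine-precision level while the inverse-based one is noticeably larger.

```python
import numpy as np
from scipy.linalg import hilbert

n = 10
A = hilbert(n)                     # ill-conditioned Hilbert matrix
b = A @ np.ones(n)                 # exact solution is the all-ones vector

x_solve = np.linalg.solve(A, b)    # factor-and-solve, no explicit inverse
x_inv = np.linalg.inv(A) @ b       # explicit inverse, then multiply

r_solve = np.linalg.norm(b - A @ x_solve)
r_inv = np.linalg.norm(b - A @ x_inv)
```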