[Math] Condition Number related to Root finding problems

na.numerical-analysis

Suppose we want to find the root of the equation $f(x)=\phi(x) - d = 0$, where $d$ is a real constant and $f$ is a continuously differentiable function.

The problem is well posed if the inverse $\phi^{-1}$ exists, since in that case $x = \phi^{-1}(d)$.

Now, most numerical analysis books I have read on the topic of solving nonlinear equations (for example, this grad-level book) open the discussion by introducing the condition number associated with the problem, which in this case happens to be

$$ \kappa = \frac{1}{f^{\prime}(x)} = \frac{1}{\phi^{\prime}(x)}$$

From this equation we see that the condition number $\kappa$ is large when $\phi^{\prime}(x)$ is close to zero, and we are told that a large $\kappa$ leads to ill-conditioned problems.
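A quick numerical sketch of that sensitivity (this is an assumed illustrative choice, not from any particular book): take $\phi(x) = x^3$, whose derivative vanishes at $x = 0$, and compare the exact roots of $\phi(x) = d$ for two nearby values of the datum $d$.

```python
# Conditioning of root finding for phi(x) = x**3 near x = 0,
# where phi'(x) = 3x**2 is close to zero (assumed example).

def root_of(d):
    # Exact root of x**3 = d (signed real cube root).
    return abs(d) ** (1 / 3) * (1 if d >= 0 else -1)

d = 0.0
eps = 1e-12                 # tiny perturbation of the datum d
dx = root_of(d + eps) - root_of(d)

print(dx)                   # about 1e-4
print(dx / eps)             # amplification factor, about 1e8
```

A perturbation of size $10^{-12}$ in $d$ moves the root by about $10^{-4}$: the error in the input is amplified by roughly eight orders of magnitude, exactly what a large $\kappa$ predicts.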

But so what? I can't think of an example for which, say, Newton–Raphson or any other root-finding scheme fails when the problem is ill-conditioned.

I understand that the condition number measures the sensitivity of the roots of $f$ with respect to the input datum, $d$ in this case. But what practical implications does this have?

I can think of plenty of severely ill-conditioned problems, but Newton–Raphson has no trouble with them. (I know there are situations in which Newton–Raphson fails, but these are due to problems intrinsic to the algorithm and are not dependent on the condition number.) Does anyone know of an example where the condition number for the above problem is large and a root-finding scheme fails? If not, then what is the point of examining the condition number?

I apologize in advance if this question is not at the level of the forum. I thought that, since the answer cannot be found in an elementary text, this was a good place to ask.

Best Answer

In addition to the convergence speed (and radius) mentioned in Pietro Majer's answer, there is another factor: if the problem is ill-conditioned, the solution is sensitive to perturbations.

If you make an error of magnitude $\varepsilon$ in computing the iteration or in the problem data, then the solution is perturbed by a quantity of order $\kappa \varepsilon$, where $\kappa$ is the condition number. In practice, you have a relative error of at least one part in $10^{16}$ in most computer implementations, and often a much larger one if your function depends on measured data or approximations. So Newton's method may converge without trouble, but the solution that you get could be total garbage.
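To make this concrete, here is a minimal sketch (the choice $\phi(x) = x^3$ and the perturbation size are assumptions for illustration): Newton's method is applied to $\phi(x) - d = 0$ where the datum $d$ carries a tiny error of size $10^{-12}$. The iteration converges cleanly, yet the computed root sits about $10^{-4}$ away from the root of the unperturbed problem, an amplification of roughly $10^8$.

```python
# Newton's method on phi(x) - d = 0 with phi(x) = x**3 (assumed example).
# The true datum is d = 0 (root x = 0), but d carries a 1e-12 error.

def newton(f, fp, x0, tol=1e-14, maxit=200):
    # Plain Newton iteration: stop when the step is below tol.
    x = x0
    for _ in range(maxit):
        step = f(x) / fp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

phi = lambda x: x ** 3
dphi = lambda x: 3 * x ** 2

d_noisy = 1e-12   # datum contaminated by measurement/rounding error

root = newton(lambda x: phi(x) - d_noisy, dphi, x0=1.0)
print(root)       # about 1e-4: Newton converged, but far from the true root 0
```

The algorithm itself behaves perfectly; the garbage comes entirely from the conditioning of the problem, which no root-finding scheme can repair.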
