[Math] How to prove that your approximation using Newton’s Method is correct to $x$ decimal places

approximation, calculus

I just watched a video about a problem stated as: "Find where $f(x)= x^7-1000$ intersects the $x$-axis. Find the solution correct to 8 decimal places."

The author uses Newton's Method, repeating the iteration with each approximation $x_n$ rounded to 8 decimal places, until the last approximation is exactly the same as the previous one, namely $2.68269580$.

He then concludes that the equality of the last two approximations means the approximation is correct to 8 decimal places.
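The procedure described can be sketched in Python (my reconstruction, not the video's code; the function name, starting guess, and use of the `decimal` module are my own choices):

```python
from decimal import Decimal, getcontext

def newton_8dp(target=1000, x0=Decimal("2.00000000"), places=8):
    """Newton's method for f(x) = x**7 - target, rounding every
    iterate to `places` decimal places and stopping when two
    successive rounded iterates agree."""
    getcontext().prec = 30                 # plenty of working precision
    q = Decimal(1).scaleb(-places)         # rounding quantum, 10**-8
    x = x0.quantize(q)
    for _ in range(100):                   # safety cap on iterations
        x_new = (x - (x**7 - target) / (7 * x**6)).quantize(q)
        if x_new == x:                     # repeated value: stop
            return x_new
        x = x_new
    raise RuntimeError("iteration did not settle")

print(newton_8dp())   # 2.68269580
```

Running this reproduces the stopping behavior in question: the rounded iterates eventually repeat, and the repeated value is $2.68269580$.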

How/Why does that work?

I don't understand how the fact that the last approximation is the same as the previous one guarantees that the digits won't change anymore up to that number of decimal places.

I'm assuming it has to do with the process, and that in this case we know (please correct me if I'm wrong) that Newton's Method guarantees that consecutive approximations keep getting closer to each other as the sequence progresses.

But even if that's provably true, couldn't there be a case where, for example, our approximation sequence gives $2.\ldots777$ and then $2.\ldots999$, at which point we would conclude that our final approximation is correct to 8 decimal places, even though an increase of just $0.00000000001$ in the next approximation would be enough to change at least the digit in our 8th decimal place?

Also, from this question: Calculate cosh 1 correct to 6 decimal places, one answer states: "You will only have to develop the series for a few steps until the first significant digits will not change any more."

But how do we know that's true? How do we know that once a certain digit stops changing it'll never change again?
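To make the quoted suggestion concrete, here is a quick numerical look (Python, my own sketch) at the partial sums of $\cosh 1 = \sum_{n\ge 0} 1/(2n)!$. The terms are positive and shrink factorially, so the tail left after each term is only slightly larger than the next term itself:

```python
import math

# Partial sums of cosh(1) = sum over n of 1/(2n)!.
s = 0.0
for n in range(8):
    s += 1 / math.factorial(2 * n)
    print(n, f"{s:.8f}")

# After a handful of terms the partial sum agrees with cosh(1)
# to well within half a unit in the 6th decimal place.
print(round(s, 6), round(math.cosh(1), 6))
```

The digits stop changing because the remaining tail is bounded by roughly the last added term; only a limit sitting almost exactly on a rounding boundary could still flip the rounded digit, which is precisely the worry raised above.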

Best Answer

For the Newton-Raphson case, if the number is rounded to $8$ decimal places between iterations, then the rounded value completely determines the next iterate; so if you get the same number twice in a row, every successive number has to be the same after that. Since Newton-Raphson is quadratically convergent, you normally gain about twice the significant figures on every iteration once a pretty good approximation has been achieved. Of course you can make up examples where no convergence is reached, or even force the algorithm to cycle among a limited set of values.
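The digit-doubling is easy to see numerically; an illustrative sketch (Python, plain floating point, starting guess of my own choosing):

```python
# Newton's method on f(x) = x**7 - 1000: once the iterate is close,
# the error is roughly squared each step, i.e. the number of correct
# digits about doubles per iteration until machine precision.
root = 1000 ** (1 / 7)
x = 3.0
for k in range(6):
    x = x - (x**7 - 1000) / (7 * x**6)
    print(k, f"{x:.15f}", f"error = {abs(x - root):.1e}")
```

The printed errors drop from $\sim 10^{-2}$ to $\sim 10^{-3}$, $\sim 10^{-5}$, $\sim 10^{-9}$, and then hit the floating-point floor.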

If you are tracking the values by hand you can see when you have over half the significant figures you want and perform the last iteration with extra precision to create a better probability of getting the last digit right. But you are right in thinking that it's a big problem to try to get the last digit right every time in floating point computations. In general you probably need about twice the significant figures of your final output to get the last digit right every time.
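That "last iteration with extra precision" idea can be sketched with Python's `decimal` module (my illustration; the choice of 30 digits is arbitrary):

```python
from decimal import Decimal, getcontext

# Converge in ordinary double precision first...
x = 3.0
for _ in range(8):
    x = x - (x**7 - 1000) / (7 * x**6)

# ...then take a single Newton step at 30 significant digits, so the
# digits we are about to round off are actually trustworthy.
getcontext().prec = 30
d = Decimal(x)                      # exact conversion of the float
d = d - (d**7 - 1000) / (7 * d**6)
print(d.quantize(Decimal("1.00000000")))   # 2.68269580
```

This root is itself a borderline case: to $10$ places it is $2.6826957953$, so the $9^{\text{th}}$ digit is a $5$ and getting the $8^{\text{th}}$ rounded digit right genuinely requires the extra precision.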

Let's look at an example with $\cosh x$, where we run through all the $8$-digit numbers between $0$ and $1$ and then see how many might require $15$ digits to get the $8^{\text{th}}$ digit right. Our program searches for outputs whose digits $9$ through $15$ form a value between $4999999$ and $5000001$. Since this is a range of $2$ out of $1\times10^7$, we might expect about $20$ hits for a program testing $1\times10^8$ inputs, and we are not far off in that estimate.

program round
   ! Scan x = 0, 1e-8, 2e-8, ..., 1 and flag inputs where cosh(x) is
   ! nearly ambiguous when rounded to 8 decimal places: digits 9
   ! through 15 of the output lie between 4999999 and 5000001.
   use ISO_FORTRAN_ENV, only: wp=>REAL128, qp=>REAL128, wi=>INT64
   implicit none
   real(wp) x, y, z
   real(qp) qx, qy
   integer(wi) i
   do i = 0, 10_wi**8
      x = 1.0e-8_wp*i
      y = cosh(x)
      z = y*1.0e8_wp + 0.5_wp   ! z is near an integer iff y sits near a rounding boundary
      if (abs(z - nint(z)) < 1.0e-7_wp) then
         qx = 1.0e-8_qp*i       ! recompute the hit and print it to 20 decimals
         qy = cosh(qx)
         write(*,'(f10.8,1x,f22.20)') x, qy
      end if
   end do
end program round

Output:

0.00010000 1.00000000500000000417
0.00030000 1.00000004500000033750
0.05844226 1.00170823500000040322
0.07380594 1.00272489500000039410
0.44746231 1.10179282500000099007
0.45315675 1.10444463500000093431
0.47303029 1.11398059500000076845
0.47962980 1.11724437499999901204
0.49332468 1.12417258499999983893
0.49725888 1.12620181499999997563
0.51736259 1.13684395499999912340
0.59708506 1.18361444500000091792
0.60322956 1.18752751500000094494
0.64184055 1.21314873500000023866
0.72946242 1.27806675499999917355
0.75252442 1.29676328499999998383
0.90813720 1.44148689499999975793
0.93055850 1.46512925500000051185

The output for $0.00010000$ was expected from the Taylor series for $\cosh x$, but check out how close we got with $0.75252442$: we would have had to calculate the $17^{\text{th}}$ digit to round the $8^{\text{th}}$ digit correctly.
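These values can be cross-checked independently, for example with Python's `decimal` module, whose `exp()` is correctly rounded (my sketch; `dcosh` is a helper of my own, not a library function):

```python
from decimal import Decimal, getcontext

getcontext().prec = 40   # ~40 significant digits of working precision

def dcosh(x: Decimal) -> Decimal:
    # cosh x = (exp(x) + exp(-x)) / 2
    return (x.exp() + (-x).exp()) / 2

# Two of the hits from the table above
for s in ("0.00010000", "0.75252442"):
    print(s, dcosh(Decimal(s)))
```

For $0.0001$ this reproduces the Taylor-series value $1 + x^2/2 + x^4/24 + \cdots = 1.0000000050000000041666\ldots$, and both results should agree with the REAL128 column above in essentially all printed digits.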