You need some estimates over the range covered by the iteration: a lower bound $m_1$ for the first derivative and an upper bound $M_2$ for the second derivative (in absolute value) over that region.
Here we have $f(x)=x^2-28$ over the interval $[5,6]$. As
$$
N(x)=x-\frac{f(x)}{f'(x)}\implies f(N(x))=\frac12f''(\tilde x)\frac{f(x)^2}{f'(x)^2}
$$
gives contraction for
$$
\frac12M_2m_1^{-2}|f(x)|<1\iff|f(x)|<2m_1^2/M_2
$$
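For completeness, the identity for $f(N(x))$ above is just the second-order Taylor expansion of $f$ about $x$: with $N(x)-x=-f(x)/f'(x)$ the linear terms cancel, leaving
$$
f(N(x))=f(x)+f'(x)\bigl(N(x)-x\bigr)+\tfrac12 f''(\tilde x)\bigl(N(x)-x\bigr)^2=\tfrac12 f''(\tilde x)\,\frac{f(x)^2}{f'(x)^2}
$$
for some $\tilde x$ between $x$ and $N(x)$.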
Using
$$
\min_{x\in[5,6]}|f'(x)|=10\text{ and }f''(x)\equiv 2
$$
the condition reads $|f(x)|<2m_1^2/M_2=100$, which is amply satisfied on this interval (there $|f(x)|\le 8$), and for the starting point $x_0=5$, where $|f(x_0)|=3$, one gets the quadratic convergence estimate
$$
|f(N(x))|\le 10^{-2}|f(x)|^2\implies |f(x_n)|\le 100\cdot\left(10^{-2}|f(x_0)|\right)^{2^n}=100\cdot\left(0.03\right)^{2^n}
$$
As $|f(x)|=|f(x)-f(x_*)|\ge m_1|x-x_*|$, so that $|x-x_*|\le 0.1\cdot|f(x)|$, the distance to the root satisfies
$$
|x_n-x_*|\le 10\cdot\left(0.03\right)^{2^n}
$$
To get 5 digits after the decimal point you need $|x_n-x_*|\le 5\cdot10^{-6}$, and $n=2$ gives $|x_2-x_*|\le 81\cdot10^{-7}$, which is slightly too large, so $n=3$ satisfies the error bound with a wide margin.
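These bounds are easy to check numerically; here is a sketch in Python (the function, interval, starting point $x_0=5$, and the bound $10\cdot(0.03)^{2^n}$ are taken from the derivation above):

```python
import math

root = math.sqrt(28)                 # the root x* of f(x) = x^2 - 28
x = 5.0                              # starting point, |f(x0)| = 3
for n in range(4):
    bound = 10 * 0.03 ** (2 ** n)    # derived error bound 10*(0.03)^(2^n)
    print(f"n={n}  |x_n - x*|={abs(x - root):.3e}  bound={bound:.3e}")
    assert abs(x - root) <= bound    # each iterate obeys the bound
    x = x - (x * x - 28) / (2 * x)   # Newton step for f(x) = x^2 - 28
```

For $n=3$ the bound evaluates to about $6.6\cdot10^{-12}$, far below the $5\cdot10^{-6}$ target.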
For the Newton-Raphson case, if the number is rounded to $8$ decimal places between iterations, then once you get the same number twice in a row, every subsequent number must be the same. Since Newton-Raphson is quadratically convergent, the number of correct significant figures roughly doubles with every iteration once a reasonably good approximation has been reached. Of course, you can construct examples where no convergence is reached, or even force the algorithm to cycle among a limit set of values.
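The "same number twice in a row" stopping rule can be sketched in Python (not the Fortran used later); reusing $f(x)=x^2-28$ from the earlier example, with 8-decimal rounding between iterations:

```python
import math

def newton_rounded(x, places=8):
    # Newton for f(x) = x^2 - 28, rounding every iterate to `places`
    # decimal places; stop when the same number appears twice in a row.
    while True:
        x_next = round(x - (x * x - 28) / (2 * x), places)
        if x_next == x:
            return x_next
        x = x_next

result = newton_rounded(5.0)
print(result)
# In this well-behaved case the repeated value is sqrt(28) correctly
# rounded to 8 decimal places.
assert result == round(math.sqrt(28), 8)
```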
If you are tracking the values by hand, you can see when you have more than half the significant figures you want and perform the last iteration with extra precision to improve the chances of getting the last digit right. But you are right in thinking that it is a hard problem to guarantee the last digit in floating-point computations. In general you need roughly twice the significant figures of your final output to get the last digit right every time.
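The "one extra-precision final iteration" trick can be sketched with the same $\sqrt{28}$ example, using Python's `decimal` module for the last step (the working precision of 30 digits is an arbitrary choice):

```python
from decimal import Decimal, getcontext

# Ordinary double-precision Newton for f(x) = x^2 - 28
x = 5.0
for _ in range(6):
    x = x - (x * x - 28) / (2 * x)   # converges to ~16 correct digits

# One final step carried out with extra precision: a quadratically
# convergent step roughly doubles the number of correct digits.
getcontext().prec = 30
d = Decimal(repr(x))
d = (d + Decimal(28) / d) / 2        # Newton step for x^2 - 28 in Decimal
print(d)                             # sqrt(28) to well beyond 8 places
assert str(d).startswith("5.29150262212918")
```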
Let's look at an example with $\cosh x$, where we run through all the $8$-digit numbers between $0$ and $1$ and see how many might require $15$ digits to get the $8^{\text{th}}$ digit right. Our program searches for outputs whose digits $9$ through $15$ form a value between $4999999$ and $5000001$. Since this is a range of $2$ out of $1\times10^7$, we might expect about $20$ hits from a program testing $1\times10^8$ inputs, and we are not far off in that estimate.
program round
   ! Scan all 8-decimal-digit x in [0,1]; flag those where cosh(x),
   ! computed in double precision, lands within 1e-7 of a half-way
   ! rounding case in the 8th decimal digit, then recompute the hits
   ! in quadruple precision for printing.
   use ISO_FORTRAN_ENV, only: wp=>REAL64, qp=>REAL128, wi=>INT64
   implicit none
   real(wp) x, y, z
   real(qp) qx, qy
   integer(wi) i
   do i = 0, 10_wi**8
      x = 1.0e-8_wp*i
      y = cosh(x)
      z = y*1.0e8_wp + 0.5_wp         ! shift so the ties sit near integers
      if (abs(z - nint(z)) < 1.0e-7_wp) then
         qx = 1.0e-8_qp*i             ! recompute the hit in quad precision
         qy = cosh(qx)
         write(*,'(f10.8,1x,f22.20)') x, qy
      end if
   end do
end program round
Output:
0.00010000 1.00000000500000000417
0.00030000 1.00000004500000033750
0.05844226 1.00170823500000040322
0.07380594 1.00272489500000039410
0.44746231 1.10179282500000099007
0.45315675 1.10444463500000093431
0.47303029 1.11398059500000076845
0.47962980 1.11724437499999901204
0.49332468 1.12417258499999983893
0.49725888 1.12620181499999997563
0.51736259 1.13684395499999912340
0.59708506 1.18361444500000091792
0.60322956 1.18752751500000094494
0.64184055 1.21314873500000023866
0.72946242 1.27806675499999917355
0.75252442 1.29676328499999998383
0.90813720 1.44148689499999975793
0.93055850 1.46512925500000051185
The output for $0.00010000$ was expected from the Taylor series $\cosh x\approx 1+\tfrac{x^2}{2}$, since $\tfrac{x^2}{2}=5\times10^{-9}$ puts the value almost exactly on a rounding boundary. But check how close we got with $0.75252442$: the true value sits within $2\times10^{-17}$ of the tie $1.296763285$, so we would have had to calculate the $17^{\text{th}}$ digit to round the $8^{\text{th}}$ digit correctly.
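Both borderline cases can be confirmed with arbitrary-precision arithmetic; here is a sketch using Python's `decimal` module and $\cosh x=(e^x+e^{-x})/2$ (the working precision of 30 digits is an arbitrary choice):

```python
from decimal import Decimal, getcontext

getcontext().prec = 30            # work with 30 significant digits

def cosh_d(x):
    # cosh via exp, valid in Decimal arithmetic
    return (x.exp() + (-x).exp()) / 2

# Taylor case: cosh(1e-4) ~ 1 + x^2/2 = 1.000000005, a near-exact tie
y1 = cosh_d(Decimal("0.0001"))
assert abs(y1 - Decimal("1.000000005")) < Decimal("1e-17")

# The hard case from the table: within ~2e-17 of the rounding tie
y2 = cosh_d(Decimal("0.75252442"))
assert abs(y2 - Decimal("1.296763285")) < Decimal("2e-17")
print(y1, y2)
```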
Best Answer
Your teacher's example cannot work in general, as I present a counterexample below. Nonetheless, I think your teacher's approach is a reasonable way to explain the intuition behind what happens in a typical case, provided the proper caveats are given.
I think a more reasonable stopping condition, for programming purposes, is to iterate until the value of $f$ is very small. If the first derivative is relatively large in a neighborhood of the last iterate, this can be enough to prove that there is definitely a root nearby. Of course, Christian Blatter has already provided sufficient conditions.
For a counterexample, suppose that $$f(x) = x(x-\pi)^2 + 10^{-12}.$$ Then the Newton's method iteration function is $$N(x) = x-f(x)/f'(x) = x-\frac{x (x-\pi)^2+10^{-12}}{3 x^2-4\pi x+\pi ^2},$$ and if we iterate $N$ 20 times starting from $x_0=3.0$, we get $$3.,\; 3.07251,\; 3.10744,\; 3.12461,\; 3.13313,\; 3.13736,\; 3.13948,\; 3.14054,\; 3.14106,\; 3.14133,\; 3.14146,\; 3.14153,\; 3.14156,\; 3.14158,\; 3.14158,\; 3.14159,\; 3.14159,\; 3.14159,\; 3.14159,\; 3.14159,\; 3.14159$$ Thus, your teacher's method implies there is a root at $x=3.14159$ when, of course, there is no root near here: $f(x)>0$ for all $x>0$. There is, however, a root near zero, to which the process eventually converges after several thousand iterates.
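This behavior is easy to reproduce; below is a sketch in Python, using the constant $10^{-12}$ from the definition of $f$ and the seed $x_0=3.0$:

```python
import math

def f(x):
    return x * (x - math.pi) ** 2 + 1e-12

def N(x):
    # Newton iteration function; f'(x) = 3x^2 - 4*pi*x + pi^2
    return x - f(x) / (3 * x * x - 4 * math.pi * x + math.pi ** 2)

x = 3.0
history = [x]
for _ in range(20):
    x = N(x)
    history.append(x)

print([round(v, 5) for v in history])
# The last iterates agree to 5 decimal places, so the rounding-based
# stopping rule fires near pi ...
assert round(history[-1], 5) == round(history[-2], 5) == 3.14159
# ... yet f has no root there: f > 0 on, e.g., all of [3.0, 3.3]
assert min(f(3.0 + 0.001 * k) for k in range(301)) > 0
```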
To place this in a broader context, let's examine the basins of attraction for this polynomial in the complex plane. There are three complex roots: one just to the left of zero, and two at $\pi\pm\varepsilon i$, where $\varepsilon$ is a small positive number. In the picture below, we shade each complex initial seed according to which of these roots Newton's method ultimately converges to.
Now, it is a theorem in complex dynamics that wherever two of these basins meet, there are points of the third basin arbitrarily nearby. As a result, there is definitely a number whose decimal expansion starts with $3.14159$ that eventually converges to the root near zero under iteration of Newton's method.