[Math] What can be said about the convergence rate of the bisection method

bisection, convergence-divergence, numerical-methods

The bisection method is commonly said to be linearly convergent, but as far as I can tell, it does not neatly fit the definition. For example, a method converges with order $\alpha$ if

$$|x_{n+1}-x^*|\le\lambda|x_n-x^*|^\alpha \quad \forall\, n > N$$

Alas, I believe bisection has the pesky tendency that the error can increase at arbitrarily large steps. I'm not sure how to write it succinctly, but the approximation keeps stepping toward the solution in steps of $(b_n - a_n)/2$, where $[a_n, b_n]$ is the current interval. There will be stretches where the error decreases with every step, until the iterate overshoots the solution and the error increases at that step. It then decreases again until it overshoots once more. Of course, each overshoot is smaller than the last, but there will still be arbitrarily large steps $n$ at which the error actually increases. (The small numerical sketch below illustrates this non-monotone behavior.)
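To make the non-monotonicity concrete, here is a minimal Python sketch (my own illustration, not part of the original question; the test function $f(x) = x - 0.1$ and the helper name `bisect_errors` are assumptions chosen so the effect shows up early):

```python
# Bisection on f(x) = x - 0.1 over [0, 1]; the root is x* = 0.1.
# The bracket width halves every step, but the midpoint error |x_n - x*|
# is not monotone: it can jump up when the midpoint overshoots the root.

def bisect_errors(f, a, b, root, steps=8):
    """Run bisection and return the midpoint errors |x_n - root|."""
    errors = []
    for _ in range(steps):
        mid = (a + b) / 2
        errors.append(abs(mid - root))
        if f(a) * f(mid) <= 0:   # root is in the left half
            b = mid
        else:                    # root is in the right half
            a = mid
    return errors

for n, e in enumerate(bisect_errors(lambda x: x - 0.1, 0.0, 1.0, 0.1)):
    print(f"n={n}: |e_n| = {e:.6f}")
# The error falls to 0.025 at n=2, then rises to 0.0375 at n=3,
# falls to 0.00625 at n=4, then rises again to 0.009375 at n=5.
```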

Is this incorrect? If so, can you provide a proof? If it is correct, then what can we say about the convergence? I find it unfortunate, because bisection certainly seems like the simplest way to demonstrate clear convergence; at face value it looks linear, yet it does not quite fit the definition.

**Edit:**

I am content with Ian's answer: bisection converges linearly if we use a different definition of the order of convergence. However, my question now is: is there an example of a method that satisfies the definition

$$\lim_{n\to\infty} \frac{|x_{n+1}-x^*|}{|x_n-x^*|^\alpha} > 0\,?$$

Best Answer

The more practical version of the definition of linear convergence is that $|e_n| \leq \epsilon_n$, where $e_n = x_n - x^*$ is the error and $\epsilon_n$ is a sequence satisfying

$$\lim_{n \to \infty} \frac{\epsilon_{n+1}}{\epsilon_n} = \mu<1.$$

The "simple" version is easier to think about, but asserts some monotonicity of convergence that you just usually don't have. This extension is discussed at https://en.wikipedia.org/wiki/Rate_of_convergence#Extended_definition

In bisection, this $\epsilon_n$ can be furnished quite explicitly as $(b-a)2^{-n}$. The cases where $|e_n|$ is much smaller than this $\epsilon_n$ should be thought of as "happy coincidences". For example, if you use bisection to solve $x^2-2=0$ with the initial bracket $[1,2]$, you get one such happy coincidence at the first step, when the error of about $0.086$ is compared with the bound of $0.5$.
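Here is a short Python check of this example (a hypothetical script of mine, not from the answer), printing the actual error $|e_n|$ next to the envelope $\epsilon_n = (b-a)2^{-n}$:

```python
# Bisection for f(x) = x^2 - 2 on [1, 2]; the root is sqrt(2).
# Compare the actual midpoint error with the envelope eps_n = (b - a) * 2^{-n}.

import math

def f(x):
    return x * x - 2

a, b = 1.0, 2.0
root = math.sqrt(2)

for n in range(1, 9):
    mid = (a + b) / 2
    err = abs(mid - root)
    eps = (2.0 - 1.0) * 2 ** (-n)   # envelope from the original bracket [1, 2]
    print(f"n={n}: |e_n| = {err:.6f}   eps_n = {eps:.6f}")
    if f(a) * f(mid) <= 0:
        b = mid
    else:
        a = mid
# n=1 is the "happy coincidence": |e_1| ≈ 0.0858 while eps_1 = 0.5; the error
# then rises to about 0.164 at n=2 but stays below eps_2 = 0.25, as guaranteed.
```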