Increasing the precision, both in the order of the method and in the number of gridpoints used, usually leads to a more accurate estimate of the integral we are trying to compute. However, this is not always the case, as the following (artificial) example shows.
$$\bf \text{Example where higher order does not imply better accuracy}$$
Let
$$f(x) = \left\{\matrix{1 & x < \frac{1}{2}\\0 & x \geq \frac{1}{2}}\right.$$
and consider the integral $I=\int_0^1f(x){\rm d}x = \frac{1}{2}$. If we use the trapezoidal rule with $n$ equal subintervals (gridpoints $x_i = \frac{i}{n}$) then
$$I_{n} = \frac{1}{n}\sum_{i=1}^{n}\frac{f(\frac{i-1}{n})+f(\frac{i}{n})}{2} \implies I_n = \left\{\matrix{\frac{1}{2} & n~~\text{odd}\\\frac{1}{2} - \frac{1}{2n} & n~~\text{even}}\right.$$
so for $n=3$ we already have the exact answer, which is better than any even $n$ no matter how large. This shows that increasing the number of gridpoints does not always improve the accuracy. With Simpson's rule (which requires $n$ to be even) we find
$$I_n = \frac{1}{3n}\sum_{i=1}^{n/2}\left[f\left(\frac{2i-2}{n}\right)+4f\left(\frac{2i-1}{n}\right)+f\left(\frac{2i}{n}\right)\right] \implies I_n = \left\{\matrix{\frac{1}{2} - \frac{1}{3n}&n\equiv 0\mod 4\\\frac{1}{2} - \frac{2}{3n} & n\equiv 2\mod 4}\right.$$
so for $n\equiv 2 \mod 4$ Simpson's rule is actually less accurate than the trapezoidal rule with the same $n$: even though Simpson's rule has higher order, it does not always do better.
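These closed forms are easy to check numerically. Here is a short Python sketch (Python used purely for illustration) with straightforward implementations of both composite rules applied to the step function:

```python
def f(x):
    # the step integrand from the example: 1 for x < 1/2, 0 for x >= 1/2
    return 1.0 if x < 0.5 else 0.0

def trapezoid(f, a, b, n):
    # composite trapezoidal rule with n subintervals
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def simpson(f, a, b, n):
    # composite Simpson's rule with n subintervals (n even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return h / 3 * s

print(trapezoid(f, 0, 1, 3))    # ≈ 1/2: exact for odd n
print(trapezoid(f, 0, 1, 100))  # ≈ 0.495 = 1/2 - 1/(2*100)
print(simpson(f, 0, 1, 8))      # ≈ 11/24 = 1/2 - 1/(3*8)
print(simpson(f, 0, 1, 10))     # ≈ 13/30 = 1/2 - 2/(3*10)
```

For $n=10$, for instance, the trapezoidal error is $\frac{1}{20}$ while Simpson's is $\frac{1}{15}$, so the higher-order rule is the less accurate one here.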
$$\bf \text{What does higher degree of precision really mean?}$$
If we have a smooth function, then a standard Taylor series error analysis shows that the error in estimating the integral $\int_a^bf(x){\rm d}x$ using $n$ equal subintervals is bounded by (here for Simpson's and the trapezoidal rule)
$$\epsilon_{\rm Simpson} = \frac{(b-a)^5}{180n^4}\max_{\zeta\in[a,b]}|f^{(4)}(\zeta)|$$
$$\epsilon_{\rm Trapezoidal} = \frac{(b-a)^3}{12n^2}\max_{\zeta\in[a,b]}|f^{(2)}(\zeta)|$$
Note that the result we get from such an error analysis is always an upper bound (or in some cases an order of magnitude) for the error, as opposed to the exact value of the error. What this error analysis tells us is that if $f$ is smooth on $[a,b]$, so that its derivatives are bounded, then the error of a higher order method will tend to decrease faster as we increase the number of gridpoints, and consequently we typically need fewer gridpoints to reach the same accuracy with a higher order method.
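To see these rates in action, here is a small Python check (an illustrative sketch) that estimates $\int_0^\pi \sin x\,{\rm d}x = 2$ and watches how fast the error falls as the grid is refined:

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule with n subintervals
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def simpson(f, a, b, n):
    # composite Simpson's rule with n subintervals (n even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return h / 3 * s

exact = 2.0  # integral of sin over [0, pi]
errs_t, errs_s = [], []
for n in (8, 16, 32):
    errs_t.append(abs(trapezoid(math.sin, 0.0, math.pi, n) - exact))
    errs_s.append(abs(simpson(math.sin, 0.0, math.pi, n) - exact))
    print(n, errs_t[-1], errs_s[-1])
# doubling n shrinks the trapezoidal error by ~4 and the Simpson error by ~16
```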
The order of the method only tells us about the $\frac{1}{n^k}$ fall-off of the error and says nothing about the prefactor in front, so a method that has an error of $\frac{100}{n^2}$ will tend to be worse than a method that has an error of $\frac{1}{n}$ as long as $n < 100$.
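As a toy illustration (the prefactors $100$ and $1$ are made up), compare the two model errors directly:

```python
# model errors for a second-order method with a large prefactor
# versus a first-order method with a small one
for n in (10, 100, 1000):
    print(n, 100 / n**2, 1 / n)
# at n = 10 the second-order method is worse (1.0 vs 0.1),
# at n = 100 they break even, and only beyond that does the
# higher order win (1e-4 vs 1e-3 at n = 1000)
```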
$$\bf \text{Why do we need all these methods?}$$
In principle we don't need any methods other than the simplest one. If we can compute to arbitrary precision and have enough computing power, then we can evaluate any integral with the trapezoidal rule. However, in practice there are always limitations that in some cases force us to choose a different method.
Using a low-order method requires many gridpoints to ensure good enough accuracy, which can make the computation take too long, especially when the integrand is expensive to compute. Another problem, which can appear even if we can afford to use as many gridpoints as we want, is that round-off error (error due to computers using a finite number of digits) can come into play, so even if we use enough points the result might not be accurate.
Other methods can alleviate these potential problems. Personally, whenever I need to integrate something and have to implement the method myself, I always start with a low-order method like the trapezoidal rule. It is very easy to implement, it's hard to make errors when coding it up, and it's usually good enough for most purposes. If this is not fast enough, or if the integrand has properties (e.g. rapid oscillations) that make it a poor fit, I try a different method. For example, I have had to compute (multidimensional) integrals where the trapezoidal rule would have needed more than a year to reach good enough accuracy, but with Monte-Carlo integration the time needed was less than a minute! It's therefore good to know different numerical integration methods in case you encounter a problem where the simplest method fails.
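To give a flavor of why Monte-Carlo wins in high dimensions, here is a minimal Python sketch (the six-dimensional integrand is made up for illustration, not the one from the anecdote): a trapezoidal grid with even just $20$ points per axis would already need $20^6 = 6.4\times10^7$ evaluations, while plain Monte-Carlo gets a decent answer from $10^5$ samples.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def mc_integrate(g, dim, samples):
    # plain Monte-Carlo over the unit cube [0,1]^dim:
    # the integral equals the mean of g at uniform random points
    total = 0.0
    for _ in range(samples):
        total += g([random.random() for _ in range(dim)])
    return total / samples

# made-up test integral: ∫_{[0,1]^6} (x_1+...+x_6)^2 dx = 6/12 + (6/2)^2 = 9.5
g = lambda x: sum(x) ** 2
est = mc_integrate(g, 6, 100_000)
print(est)  # ≈ 9.5; the error falls like 1/sqrt(samples), independent of dimension
```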
A couple of notes. Since the integrand is never less than $1$, you know that $x_l<170$. In fact, when $x=170$ the integrand is about $1.087$, so the eventual answer is bounded below by $\frac{170}{1.087}\approx 156.4$. That gives you a pretty good idea about the starting point.
Bisection is useful when you don't have the derivative available, but here you have the derivative in the form of the integrand, so it seems simpler to implement Newton's method in this case, unless bisection is the point of the assignment.
Also, after the first step you know the integral up to a point near the actual root, so you need not integrate all the way from zero to the next guess for the root, just from the previous guess. Even if you integrated from zero every time, this program would zip through pretty fast anyway.
EDIT: Newton's method
% simp.m -- solve integral_0^b f(x) dx = 170 for b with Newton's method,
% using the composite Simpson rule for the integral
f = @(x) sqrt(1+(x^2/68000)^2);   % the integrand, which is also F'(b)
err = 1;
tol = 1.0e-8;
N = 10000;          % number of subintervals (even)
a = 0;
b = 163;            % initial guess
while abs(err) > tol
  h = (b-a)/N;
  % composite Simpson rule: weights 1,4,2,4,...,2,4,1
  y = f(a);
  for i = 1:N/2-1
    y = y+4*f(a+(2*i-1)*h)+2*f(a+2*i*h);
  end
  y = y+4*f(b-h)+f(b);
  y = h/3*y-170;    % F(b) = integral - 170
  yp = f(b);        % F'(b)
  err = y/yp;       % Newton step
  b = b-err;
end
b
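The incremental idea mentioned above (integrating only from the previous guess rather than from zero) can be sketched like this, here in Python for illustration, with the same integrand and target arc length:

```python
import math

f = lambda x: math.sqrt(1 + (x**2 / 68000)**2)  # the integrand, = F'(x)

def simpson(f, a, b, n):
    # composite Simpson's rule with n subintervals (n even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return h / 3 * s

def solve(target=170.0, x=163.0, tol=1e-8):
    # Newton's method on F(x) = integral_0^x f - target, with F'(x) = f(x);
    # after the first full integral, each step only integrates the slice
    # between the previous guess and the new one
    F = simpson(f, 0.0, x, 10000) - target
    while True:
        step = F / f(x)
        if abs(step) <= tol:
            return x
        x_new = x - step
        F += simpson(f, x, x_new, 200)  # works for x_new < x too (signed h)
        x = x_new

x_root = solve()
print(x_root)  # the root of integral_0^x f = 170
```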
Best Answer
The order of error only makes sense for the composite rule, since for the simple rule the step size $h=x_f-x_0=b-a$ is fixed. Only for the composite rule do you get a variable $h=(b-a)/n$ that allows us to consider the asymptotic error behavior. Thus $O(h^2)$ for the composite trapezoidal rule, and no asymptotic error order for the simple rule.
Since the (composite) Simpson rule can be seen as a Richardson extrapolation (the first step of the Romberg method) of the composite trapezoidal rule, its error order is automatically $O(h^4)$.
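This can be verified directly: the first Richardson extrapolation of two trapezoidal values, $\frac{4T_{h/2}-T_h}{3}$, reproduces the Simpson value exactly. A quick Python check (illustrative sketch):

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule with n subintervals
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def simpson(f, a, b, n):
    # composite Simpson's rule with n subintervals (n even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return h / 3 * s

f, a, b, n = math.exp, 0.0, 1.0, 10
t_h = trapezoid(f, a, b, n)         # trapezoid with step h
t_h2 = trapezoid(f, a, b, 2 * n)    # trapezoid with step h/2
richardson = (4 * t_h2 - t_h) / 3   # cancels the O(h^2) error term
print(abs(richardson - simpson(f, a, b, 2 * n)))  # agrees up to rounding
```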