Old question, but since the right answer hasn't yet been posted...
The real reason the trapezoidal rule can have smaller error than Simpson's rule is that it performs spectacularly when integrating regular periodic functions over a full period. There are two ways to explain this phenomenon. First we can start with
$$\begin{align}\int_0^1f(x)dx&=\left.\left(x-\frac12\right)f(x)\right|_0^1-\int_0^1\left(x-\frac12\right)f^{\prime}(x)dx\\
&=\left.B_1(x)f(x)\right|_0^1-\int_0^1B_1(x)f^{\prime}(x)dx\\
&=\frac12\left(f(0)+f(1)\right)-\left.\frac12B_2(x)f^{\prime}(x)\right|_0^1+\frac12\int_0^1B_2(x)f^{\prime\prime}(x)dx\\
&=\frac12\left(f(0)+f(1)\right)-\frac12B_2\left(f^{\prime}(1)-f^{\prime}(0)\right)+\frac12\int_0^1B_2(x)f^{\prime\prime}(x)dx\\
&=\frac12\left(f(0)+f(1)\right)-\sum_{n=2}^{2N}\frac{B_n}{n!}\left(f^{(n-1)}(1)-f^{(n-1)}(0)\right)+\int_0^1\frac{B_{2N}(x)}{(2N)!}f^{(2N)}(x)dx\end{align}$$
where $B_n(x)$ is the $n^{\text{th}}$ Bernoulli polynomial and $B_n=B_n(1)$ is the $n^{\text{th}}$ Bernoulli number. Since $B_{2n+1}=0$ for $n>0$, we also have
$$\begin{align}\int_0^1f(x)dx=\frac12\left(f(0)+f(1)\right)-\sum_{n=1}^{N}\frac{B_{2n}}{(2n)!}\left(f^{(2n-1)}(1)-f^{(2n-1)}(0)\right)+\int_0^1\frac{B_{2N}(x)}{(2N)!}f^{(2N)}(x)dx\end{align}$$
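As a quick sanity check of this formula, take $f(x)=x^2$ with $N=1$: the remainder vanishes because $f^{\prime\prime}(x)=2$ is constant and $\int_0^1B_2(x)dx=0$, so with $B_2=1/6$,
$$\frac12\left(f(0)+f(1)\right)-\frac{B_2}{2!}\left(f^{\prime}(1)-f^{\prime}(0)\right)=\frac12\left(0+1\right)-\frac{1/6}{2}\left(2-0\right)=\frac12-\frac16=\frac13=\int_0^1x^2dx$$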
That leads to the trapezoidal rule with correction terms
$$\begin{align}\int_a^bf(x)dx&=\sum_{k=1}^m\int_{a+(k-1)h}^{a+kh}f(x)dx\\
&=\frac h2\left(f(a)+f(b)\right)+h\sum_{k=1}^{m-1}f(a+kh)-\sum_{n=1}^N\frac{h^{2n}B_{2n}}{(2n)!}\left(f^{(2n-1)}(b)-f^{(2n-1)}(a)\right)\\
&\quad+\int_a^b\frac{h^{2N}B_{2N}\left(\left\{\frac{x-a}h\right\}\right)}{(2N)!}f^{(2N)}(x)dx\end{align}$$
Since we are assuming $f(x)$ has period $b-a$ and all of its derivatives are continuous, the correction terms all add up to zero and we are left with
$$\begin{align}\int_a^bf(x)dx&=\frac h2\left(f(a)+f(b)\right)+h\sum_{k=1}^{m-1}f(a+kh)+\int_a^b\frac{h^{2N}B_{2N}\left(\left\{\frac{x-a}h\right\}\right)}{(2N)!}f^{(2N)}(x)dx\end{align}$$
So the error is $O(h^{2N})$ for any $N$, possibly quite large; the only limitation is the point at which the product of the Bernoulli polynomial and the $2N^{\text{th}}$ derivative starts to grow faster than $h^{-2N}$ shrinks.
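In other words, for a smooth periodic integrand the composite trapezoidal rule converges faster than any fixed power of $h$. A quick illustration in Python (independent of the Fortran program below; the integrand $1/(1+\frac12\cos x)$ and exact value are the ones used in the first test later in this post):

```python
import math

def trap(f, a, b, m):
    # composite trapezoidal rule with m subintervals
    h = (b - a) / m
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, m)))

e = 0.5
exact = 2 * math.pi / math.sqrt(1 - e * e)   # = 2*pi/sqrt(1-e^2)
f = lambda x: 1 / (1 + e * math.cos(x))
for m in (4, 8, 16, 32):
    print(m, abs(trap(f, 0, 2 * math.pi, m) - exact))
```

The error collapses to machine precision already by $m=32$ subintervals, far faster than the $O(h^2)$ behavior one would naively expect from the trapezoidal rule.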
The other way to look at it is to consider that $f(x)$, being periodic and regular, can be represented by a Fourier series
$$f(x)=\frac{a_0}2+\sum_{n=1}^{\infty}\left(a_n\cos\frac{2\pi n(x-a)}{b-a}+b_n\sin\frac{2\pi n(x-a)}{b-a}\right)$$
Since it's periodic, $f(a)=f(b)$ and the trapezoidal rule computes
$$\int_a^bf(x)dx\approx h\sum_{k=0}^{m-1}f(a+kh)$$
Since $\sin\alpha\left(k+\frac12\right)-\sin\alpha\left(k-\frac12\right)=2\cos\alpha k\sin\alpha/2$, if $m$ is not a divisor of $n$,
$$\begin{align}\sum_{k=0}^{m-1}\cos\frac{2\pi nkh}{b-a}&=\sum_{k=0}^{m-1}\cos\frac{2\pi nk}m=\sum_{k=0}^{m-1}\frac{\sin\frac{2\pi n}m(k+1/2)-\sin\frac{2\pi n}m(k-1/2)}{2\sin\frac{\pi n}m}\\
&=\frac{\sin\frac{2\pi n}m(m-1/2)-\sin\frac{2\pi n}m(-1/2)}{2\sin\frac{\pi n}m}=0\end{align}$$
If $m$ is a divisor of $n$, then
$$\sum_{k=0}^{m-1}\cos\frac{2\pi nkh}{b-a}=\sum_{k=0}^{m-1}\cos\frac{2\pi nk}m=m$$
Since $\cos\alpha\left(k+\frac12\right)-\cos\alpha\left(k-\frac12\right)=-2\sin\alpha k\sin\alpha/2$, if $m$ is not a divisor of $n$,
$$\begin{align}\sum_{k=0}^{m-1}\sin\frac{2\pi nkh}{b-a}&=\sum_{k=0}^{m-1}\sin\frac{2\pi nk}m=-\sum_{k=0}^{m-1}\frac{\cos\frac{2\pi n}m(k+1/2)-\cos\frac{2\pi n}m(k-1/2)}{2\sin\frac{\pi n}m}\\
&=-\frac{\cos\frac{2\pi n}m(m-1/2)-\cos\frac{2\pi n}m(-1/2)}{2\sin\frac{\pi n}m}=0\end{align}$$
Even if $m$ is a divisor of $n$,
$$\sum_{k=0}^{m-1}\sin\frac{2\pi nkh}{b-a}=\sum_{k=0}^{m-1}\sin\frac{2\pi nk}m=0$$
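These aliasing identities are easy to verify numerically; here is a quick sketch in Python (separate from the Fortran program below):

```python
import math

def mode_sum(trig, n, m):
    # sum_{k=0}^{m-1} trig(2*pi*n*k/m): the trapezoidal sample sum of one Fourier mode
    return sum(trig(2 * math.pi * n * k / m) for k in range(m))

m = 8
for n in range(1, 3 * m + 1):
    c = mode_sum(math.cos, n, m)
    s = mode_sum(math.sin, n, m)
    # cosine modes alias to m when m divides n, vanish otherwise; sine modes always vanish
    assert abs(c - (m if n % m == 0 else 0)) < 1e-9
    assert abs(s) < 1e-9
```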
So the trapezoidal rule produces
$$\int_a^bf(x)dx\approx(b-a)\left(\frac{a_0}2+\sum_{n=1}^{\infty}a_{mn}\right)$$
Since the exact answer is $\int_a^bf(x)dx=(b-a)a_0/2$, we see that the effect of the trapezoidal rule in this case is to approximate the function $f(x)$ by its first $2m-1$ 'lowest energy' eigenfunctions and integrate that approximation. This is pretty much what Gaussian quadrature does, so it's amusing to compare the two for periodic and nonperiodic functions. The program that produces my data looks like this:
module Gmod
use ISO_FORTRAN_ENV, only: wp=>REAL64
implicit none
real(wp), parameter :: pi = 4*atan(1.0_wp)
contains
! On return, p = P_n(x) and q = P_{n-1}(x) (Legendre polynomials)
subroutine eval_legendre(n,x,p,q)
integer, intent(in) :: n
real(wp), intent(in) :: x
real(wp), intent(out) :: p, q
integer m
real(wp) r
if(n == 0) then
p = 1
q = 0
else
p = x
q = 1
do m = 2, n-1, 2
q = ((2*m-1)*x*p-(m-1)*q)/m
p = ((2*m+1)*x*q-m*p)/(m+1)
end do
if(m == n) then
r = ((2*m-1)*x*p-(m-1)*q)/m
q = p
p = r
end if
end if
end subroutine eval_legendre
! Nodes x and weights w of the n-point Gauss-Legendre rule,
! found by Newton iteration on P_n starting from simple estimates
subroutine formula(n,x,w)
integer, intent(in) :: n
real(wp), intent(out) :: x(n), w(n)
integer m
real(wp) omega, err
real(wp) p, q
real(wp), parameter :: tol = epsilon(1.0_wp)**(2.0_wp/3)
omega = sqrt(real((n+2)*(n+1),wp))
do m = n/2+1,n
if(2*m < n+7) then
x(m) = (2*m-1-n)*pi/(2*omega)
else
x(m) = 3*x(m-1)-3*x(m-2)+x(m-3)
end if
do
call eval_legendre(n,x(m),p,q)
q = n*(x(m)*p-q)/(x(m)**2-1) ! q now holds P_n'(x(m))
err = p/q ! Newton step P_n/P_n'
x(m) = x(m)-err
if(abs(err) < tol) exit
end do
call eval_legendre(n,x(m),p,q)
p = n*(x(m)*p-q)/(x(m)**2-1) ! P_n'(x(m)) at the converged root
w(m) = 2/(n*p*q) ! = 2/((1-x**2)*P_n'(x)**2), using (1-x**2)*P_n' = n*P_{n-1} at a root
x(n+1-m) = 0-x(m)
w(n+1-m) = w(m)
end do
end subroutine formula
end module Gmod
module Fmod
use Gmod
implicit none
real(wp) e
type T
real(wp) a
real(wp) b
procedure(f), nopass, pointer :: fun
end type T
contains
function f(x)
real(wp) f
real(wp), intent(in) :: x
f = 1/(1+e*cos(x))
end function f
function g(x)
real(wp) g
real(wp), intent(in) :: x
g = 1/(1+x**2)
end function g
function h1(x)
real(wp) h1
real(wp), intent(in) :: x
h1 = exp(x)
end function h1
end module Fmod
program trapz
use Gmod
use Fmod
implicit none
integer n
real(wp), allocatable :: x(:), w(:)
integer, parameter :: ntest = 5
real(wp) trap(0:ntest),simp(ntest),romb(ntest),gauss(ntest)
real(wp) a, b, h
integer m, i, j
type(T) params(3)
params = [T(0,2*pi,f),T(0,1,g),T(0,1,h1)]
e = 0.5_wp
write(*,*) 2*pi/sqrt(1-e**2)
write(*,*) pi/4
write(*,*) exp(1.0_wp)-1
do j = 1, size(params)
a = params(j)%a
b = params(j)%b
trap(0) = (b-a)/2*(params(j)%fun(a)+params(j)%fun(b))
do m = 1, ntest
h = (b-a)/2**m
trap(m) = trap(m-1)/2+h*sum([(params(j)%fun(a+h*(2*i-1)),i=1,2**(m-1))])
simp(m) = (4*trap(m)-trap(m-1))/3
n = 2**m+1
allocate(x(n),w(n))
call formula(n,x,w)
gauss(m) = (b-a)/2*sum(w*[(params(j)%fun((b+a)/2+(b-a)/2*x(i)),i=1,n)])
deallocate(x,w)
end do
romb = simp
do m = 2, ntest
romb(m:ntest) = (2**(2*m)*romb(m:ntest)-romb(m-1:ntest-1))/(2**(2*m)-1)
end do
do m = 1, ntest
write(*,*) trap(m),simp(m),romb(m),gauss(m)
end do
end do
end program trapz
For the periodic function
$$\int_0^{2\pi}\frac{dx}{1+e\cos x}=\frac{2\pi}{\sqrt{1-e^2}}$$
with $e=1/2$, so that the exact value is $7.2551974569368713$, we get
$$\begin{array}{c|cccc}N&\text{Trapezoidal}&\text{Simpson}&\text{Romberg}&\text{Gauss}\\
\hline
3&8.3775804095727811&9.7738438111682449&9.7738438111682449&8.1148990311586466\\
5&7.3303828583761836&6.9813170079773172&6.7951485544312549&7.4176821579266701\\
9&7.2555830332907121&7.2306497582622216&7.2544485033158699&7.2613981883302499\\
17&7.2551974671820254&7.2550689451457968&7.2568558971905723&7.2552065886284041\\
33&7.2551974569368731&7.2551974535218227&7.2551741878182652&7.2551974569565632
\end{array}$$
As can be seen, the trapezoidal rule even outperforms Gaussian quadrature, producing an almost exact result with $33$ data points. Simpson's rule is not as good because it averages in a trapezoidal approximation that uses fewer data points. Romberg's rule, usually pretty reliable, is even worse than Simpson's, and for the same reason. How about
$$\int_0^1\frac{dx}{1+x^2}=\frac{\pi}4=0.78539816339744828$$
$$\begin{array}{c|cccc}N&\text{Trapezoidal}&\text{Simpson}&\text{Romberg}&\text{Gauss}\\
\hline
3&0.77500000000000002&0.78333333333333333&0.78333333333333333&0.78526703499079187\\
5&0.78279411764705875&0.78539215686274499&0.78552941176470581&0.78539815997118823\\
9&0.78474712362277221&0.78539812561467670&0.78539644594046842&0.78539816339706148\\
17&0.78523540301034722&0.78539816280620556&0.78539816631942927&0.78539816339744861\\
33&0.78535747329374361&0.78539816338820911&0.78539816340956103&0.78539816339744795
\end{array}$$
This is a pretty hateful integral because its derivatives grow pretty fast in the interval of integration. Even here Romberg isn't really any better than Simpson, and now the trapezoidal rule is lagging far behind, but Gaussian quadrature is still doing well. Finally, an easy one:
$$\int_0^1e^xdx=e-1=1.7182818284590451$$
$$\begin{array}{c|cccc}N&\text{Trapezoidal}&\text{Simpson}&\text{Romberg}&\text{Gauss}\\
\hline
3&1.7539310924648253&1.7188611518765928&1.7188611518765928&1.7182810043725216\\
5&1.7272219045575166&1.7183188419217472&1.7182826879247577&1.7182818284583916\\
9&1.7205185921643018&1.7182841546998968&1.7182818287945305&1.7182818284590466\\
17&1.7188411285799945&1.7182819740518920&1.7182818284590782&1.7182818284590460\\
33&1.7184216603163276&1.7182818375617721&1.7182818284590460&1.7182818284590444
\end{array}$$
This is the order we expect: Gauss is pretty much exact at $9$ data points, Romberg at $33$, with Simpson's rule and the trapezoidal rule bringing up the rear because they aren't being served the grapefruit of a periodic integrand.
Hope the longish post isn't considered off-topic. Is the plague over yet?
Best Answer
For the second part, IMO you misunderstood the task. I believe you were asked to compute the exact value by evaluating the integral in closed form, $$ \int_{-1}^2\frac{x}{1+x^4}dx=\left[\frac12\arctan\left(x^2\right)\right]_{-1}^{2}=\frac12\left(\arctan 4-\arctan 1\right)\approx0.270\,209\,750\,135\,292 $$
Using that value as reference you get the errors of the methods, where due to the accumulation of floating-point noise there is an additional $\mu/h$ resp. $\mu N$ error term, and the total error is minimal where the method error $\sim N^{-2}$ is in balance with it, that is, at $N\sim\sqrt[3]{1/\mu}$, which with $\mu=10^{-16}$ is at a scale of $10^5$.
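As a sketch of the method-error side of that balance, here is a minimal Python check (my own, not part of the original task) that the trapezoidal error for this integral decays like $N^{-2}$ well before rounding noise takes over:

```python
import math

f = lambda x: x / (1 + x ** 4)
exact = 0.5 * (math.atan(4.0) - math.atan(1.0))  # closed-form value from above

def trap(m):
    # composite trapezoidal rule on [-1, 2] with m subintervals
    a, b = -1.0, 2.0
    h = (b - a) / m
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, m)))

# doubling m should cut the error by roughly 4 until mu*N noise dominates
for m in (100, 200, 400):
    print(m, abs(trap(m) - exact))
```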