[Math] Gauss-Legendre vs Gauss-Chebyshev quadratures (and Clenshaw-Curtis)

numerical-methods, quadrature

There's a point I wish to elucidate about integration via polynomial approximation. It is known that functions are better approximated by polynomials with non-uniformly spaced nodes, which is why Gauss quadratures work better than Newton-Cotes formulas. Apparently the Chebyshev polynomials are the ones that minimize the Runge phenomenon, so shouldn't the Gauss-Chebyshev rule be more accurate? Would this increased accuracy be significant? And if I want to use an adaptive scheme, should I then prefer Clenshaw-Curtis over Gauss-Chebyshev?

Best Answer

Clenshaw-Curtis quadrature doesn't actually use the Chebyshev polynomials of the first kind $T_n(x)$ to perform the integration; rather, it uses them as a basis set to derive an interpolatory quadrature formula. Let's review briefly how it works. We have a function $$f(x)=\sum_{k=0}^{\infty}b_kT_k(x)\approx\sum_{k=0}^{n+1}b_kT_k(x)\approx\sum_{k=0}^{n+1}a_kT_k(x)$$ We are assuming that $f(x)$ is well-behaved enough to be expanded in a series of Chebyshev polynomials like this; we would like to know the truncated series with coefficients $b_k$, but we will end up with an approximation to it with coefficients $a_k$. To pick out the coefficients $b_k$ we could use the orthogonality of the Chebyshev polynomials: $$\begin{align}\int_{-1}^1\frac{T_j(x)T_k(x)}{\sqrt{1-x^2}}dx&=\int_{-1}^1\frac{\cos\left(j\cos^{-1}x\right)\cos\left(k\cos^{-1}x\right)}{\sqrt{1-x^2}}dx\\ &=\int_{\pi}^0\frac{\cos j\theta\cos k\theta}{\sin\theta}(-\sin\theta\,d\theta)\\ &=\frac12\int_0^{\pi}\left[\cos(j+k)\theta+\cos(j-k)\theta\right]d\theta\\ &=\left\{\begin{array}{rl}\pi&\text{if}\quad j=k=0\\ \frac{\pi}2&\text{if}\quad j=k\ne0\\ 0&\text{if}\quad j\ne k\end{array}\right.\\ &=\frac{\pi}2\delta_{jk}(1+\delta_{k0})\end{align}$$ So that $$\begin{align}\int_{-1}^1\frac{f(x)T_k(x)}{\sqrt{1-x^2}}dx&=\sum_{j=0}^{\infty}b_j\int_{-1}^1\frac{T_j(x)T_k(x)}{\sqrt{1-x^2}}dx\\ &=\frac{\pi}2\sum_{j=0}^{\infty}b_j\delta_{kj}(1+\delta_{k0})=\frac{\pi}2b_k(1+\delta_{k0})\end{align}$$ But we can't do those integrals, because if we could we probably wouldn't be talking about a numerical quadrature formula in the first place, so we use the $(n+2)$-point Lobatto-Chebyshev quadrature $$\int_{-1}^1\frac{g(x)}{\sqrt{1-x^2}}dx\approx\frac{\pi}{n+1}\left\{\frac12g(1)+\sum_{j=1}^ng\left(\cos\frac{j\pi}{n+1}\right)+\frac12g(-1)\right\}$$ where the interior sample points $\cos\frac{j\pi}{n+1}$, $1\le j\le n$, are the zeros of the Chebyshev polynomial of the second kind $$U_n(x)=\frac{\sin\left((n+1)\cos^{-1}x\right)}{\sqrt{1-x^2}}$$ This formula is exact for $g(x)$ a
polynomial of degree at most $2n+1$. This means that if we construct the polynomial of degree at most $n+1$ that interpolates $f(x)$ at the $n+2$ zeros of $(1-x^2)U_n(x)$ and expand it in Chebyshev polynomials, $$p_{n+1}(x)=\sum_{k=0}^{n+1}a_kT_k(x)$$ we can find $$\begin{align}a_0&=\frac1{n+1}\left\{\frac12f(1)+\sum_{j=1}^nf\left(\cos\frac{j\pi}{n+1}\right)+\frac12f(-1)\right\}\\ a_k&=\frac2{n+1}\left\{\frac12f(1)+\sum_{j=1}^nf\left(\cos\frac{j\pi}{n+1}\right)\cos\frac{jk\pi}{n+1}+(-1)^k\frac12f(-1)\right\},\quad1\le k\le n\\ a_{n+1}&=\frac1{n+1}\left\{\frac12f(1)+\sum_{j=1}^nf\left(\cos\frac{j\pi}{n+1}\right)(-1)^j+(-1)^{n+1}\frac12f(-1)\right\}\end{align}$$ All but the last equation above are justified by the orthogonality of the Chebyshev polynomials of the first kind and the Lobatto-Chebyshev quadrature formula, given that $T_k\left(\cos\frac{j\pi}{n+1}\right)=\cos\frac{jk\pi}{n+1}$; but the latter formula is not exact for polynomials of degree $2n+2$, so for $a_{n+1}$ we have to compute the discrete normalization directly: $$\begin{align}&\frac{\pi}{n+1}\left\{\frac12\left(T_{n+1}(1)\right)^2+\sum_{j=1}^n\left(T_{n+1}\left(\cos\frac{j\pi}{n+1}\right)\right)^2+\frac12\left(T_{n+1}(-1)\right)^2\right\}\\ &\quad=\frac{\pi}{n+1}\left\{\frac12+\sum_{j=1}^n\cos^2j\pi+\frac12\right\}=\pi\end{align}$$ With this expansion in hand we can compute $$\begin{align}\int_{-1}^1T_k(x)dx&=\int_{-1}^1\cos\left(k\cos^{-1}x\right)dx=\int_{\pi}^0\cos k\theta(-\sin\theta\,d\theta)\\ &=\frac12\int_0^{\pi}\left[\sin(k+1)\theta-\sin(k-1)\theta\right]d\theta\\ &=\frac12\left[-\frac{\cos(k+1)\theta}{k+1}+\frac{\cos(k-1)\theta}{k-1}\right]_0^{\pi}\\ &=\frac12\left(1+(-1)^k\right)\frac{(-2)}{k^2-1}\end{align}$$ So now we have the formula $$\begin{align}\int_{-1}^1f(x)dx&\approx\sum_{k=0}^{n+1}a_k\int_{-1}^1T_k(x)dx\\ &=\frac1{n+1}\left\{\frac12(2)+\sum_{k=1}^n\frac12\left(1+(-1)^k\right)\frac{(-2)}{k^2-1}\right.\\ &\quad\quad\left.+\frac14\left(1+(-1)^{n+1}\right)\frac{(-2)}{n^2+2n}\right\}f(1)\\ &\quad+\sum_{j=1}^n\frac2{n+1}\left\{\frac12(2)+\sum_{k=1}^n\frac12\left(1+(-1)^k\right)\frac{(-2)}{k^2-1}\cos\frac{jk\pi}{n+1}\right.\\ &\quad\quad\left.+(-1)^j\frac14\left(1+(-1)^{n+1}\right)\frac{(-2)}{n^2+2n}\right\}f\left(\cos\frac{j\pi}{n+1}\right)\\ &\quad+\frac1{n+1}\left\{\frac12(2)+\sum_{k=1}^n\frac12\left(1+(-1)^k\right)\frac{(-2)}{k^2-1}(-1)^k\right.\\ &\quad\quad\left.+\frac14\left(1+(-1)^{n+1}\right)\frac{(-2)}{n^2+2n}\right\}f(-1)\end{align}$$ The above formula is a little problematic because, as written, it divides by zero when $k=1$; but the integrals for odd $k$ work out to zero anyway, so keeping only the even terms and substituting $k\to2k$ (note $(-1)^{2k}=1$) we can write $$\begin{align}\int_{-1}^1f(x)dx&\approx\frac1{n+1}\left\{\frac12(2)+\sum_{k=1}^{\left\lfloor\frac n2\right\rfloor}\frac{(-2)}{4k^2-1}+\frac14\left(1+(-1)^{n+1}\right)\frac{(-2)}{n^2+2n}\right\}f(1)\\ &\quad+\sum_{j=1}^n\frac2{n+1}\left\{\frac12(2)+\sum_{k=1}^{\left\lfloor\frac n2\right\rfloor}\frac{(-2)}{4k^2-1}\cos\frac{2jk\pi}{n+1}\right.\\ &\quad\quad\left.+(-1)^j\frac14\left(1+(-1)^{n+1}\right)\frac{(-2)}{n^2+2n}\right\}f\left(\cos\frac{j\pi}{n+1}\right)\\ &\quad+\frac1{n+1}\left\{\frac12(2)+\sum_{k=1}^{\left\lfloor\frac n2\right\rfloor}\frac{(-2)}{4k^2-1}+\frac14\left(1+(-1)^{n+1}\right)\frac{(-2)}{n^2+2n}\right\}f(-1)\end{align}$$ What I would like to draw the reader's attention to in the above is that although Chebyshev polynomials of the first kind were used extensively in the derivation, the actual sample points were the zeros of $(1-x^2)U_n(x)$, which doesn't have the equal-ripple property of $T_n(x)$; rather $\sqrt{1-x^2}\,U_n(x)=\sin\left((n+1)\cos^{-1}x\right)$ oscillates within the envelope of the unit circle. Had we chosen equally spaced sample points or the zeros of $(1-x^2)P_n^{\prime}(x)$, where $P_n(x)$ is the Legendre polynomial, we would have gotten the Newton-Cotes or Lobatto-Legendre quadrature formulas, because effectively the function would have been interpolated at those points and then the interpolating polynomial would have been integrated.
The derivation would have been more complicated, though, because the Chebyshev polynomials aren't orthogonal with respect to the weight function $w(x)=1$, so linear algebra would have been required to find the coefficients $a_k$.
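The derivation above can be sketched in a few lines of NumPy. This is an illustrative implementation, not a library routine; the function name and test integrands are my own. It computes the coefficients $a_k$ by the discrete cosine sums and then integrates each $T_k$ exactly:

```python
import numpy as np

def clenshaw_curtis(f, n):
    """(n+2)-point Clenshaw-Curtis rule for the integral of f over [-1, 1]:
    interpolate f at the zeros x_j = cos(j*pi/(n+1)) of (1 - x^2)U_n(x),
    expand the interpolant in Chebyshev polynomials T_k, and integrate
    each T_k exactly."""
    N = n + 1
    j = np.arange(N + 1)                     # j = 0, ..., n+1
    x = np.cos(j * np.pi / N)                # x_0 = 1, ..., x_{n+1} = -1
    fx = f(x).astype(float)
    fx[0] *= 0.5                             # the endpoint samples carry
    fx[-1] *= 0.5                            # a factor 1/2 in every a_k
    # Coefficients a_k via the discrete cosine sums.
    k = np.arange(N + 1)
    C = np.cos(np.outer(k, j) * np.pi / N)   # C[k, j] = cos(jk*pi/(n+1))
    a = (2.0 / N) * (C @ fx)
    a[0] *= 0.5                              # a_0 and a_{n+1} get 1/(n+1),
    a[-1] *= 0.5                             # not 2/(n+1)
    # Exact integrals of T_k over [-1, 1]: 2/(1 - k^2) for even k, else 0.
    tk_int = np.zeros(N + 1)
    even = k[::2]
    tk_int[even] = 2.0 / (1.0 - even.astype(float) ** 2)
    return a @ tk_int
```

For example, `clenshaw_curtis(np.exp, 16)` should match $e-e^{-1}$ to near machine precision, and since interpolation at $n+2$ points reproduces any polynomial of degree at most $n+1$, the rule is exact for such polynomials (with $n=1$ it reduces to Simpson's rule).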

The reason the zeros of $(1-x^2)U_n(x)$ were chosen rather than those of $T_n(x)$ is that when $n+1$ is doubled, the zeros of $(1-x^2)U_n(x)$ are a subset of the zeros of $(1-x^2)U_{2n+1}(x)$, so the function values at those points can be reused. If the zeros of $T_n(x)$ were used instead, it would have been necessary to go all the way to $T_{3n}(x)$ to get a set of zeros containing the previous ones.
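The nesting property is easy to check numerically (the particular value of $n$ below is an arbitrary choice of mine):

```python
import numpy as np

def cc_nodes(n):
    # Zeros of (1 - x^2) U_n(x): cos(j*pi/(n+1)) for j = 0, ..., n+1.
    return np.cos(np.arange(n + 2) * np.pi / (n + 1))

coarse = cc_nodes(8)    # n + 1 = 9, so 10 nodes
fine = cc_nodes(17)     # n + 1 doubled: zeros of (1 - x^2) U_{2n+1}(x)
# Every coarse node reappears among the fine nodes, so the function
# values computed for the coarse rule can all be reused.
reused = all(np.isclose(c, fine).any() for c in coarse)
print(reused)
```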

Gauss-Legendre quadrature has the same issue: higher-order formulas can't reuse any of the function values used to compute lower-order ones. Gauss-Kronrod formulas do reuse points, but at the cost of less accuracy than a Gauss-Legendre formula of the same order, and a much bloodier derivation.
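To see the non-nesting concretely, compare the node sets of two Gauss-Legendre rules (the orders here are arbitrary choices of mine); no node coincides, so every previous function evaluation is wasted when the order increases:

```python
import numpy as np

# Nodes (and weights) of 10- and 21-point Gauss-Legendre rules.
x10, _ = np.polynomial.legendre.leggauss(10)
x21, _ = np.polynomial.legendre.leggauss(21)
# Count 10-point nodes that reappear in the 21-point rule; only two
# odd-order rules would even share the node x = 0, so none do here.
shared = sum(np.isclose(x, x21, atol=1e-10, rtol=0.0).any() for x in x10)
print(shared)
```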

Gauss-Chebyshev formulas use the weight function $w(x)=\frac1{\sqrt{1-x^2}}$, so they are great if your integrand has inverse-square-root singularities at its endpoints, and they can even be pretty good when it doesn't, much like Gauss-Legendre often ends up working acceptably on integrands whose derivatives become unbounded at the endpoints.
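As a rough illustration (the test integrand and function names are my own), compare the two rules on $\int_{-1}^1 e^x/\sqrt{1-x^2}\,dx=\pi I_0(1)$, which has inverse-square-root singularities at both endpoints:

```python
import math
import numpy as np

def gauss_chebyshev(f, n):
    """n-point Gauss-Chebyshev rule for the integral of f(x)/sqrt(1-x^2)
    over [-1, 1]: nodes are the zeros of T_n(x), all weights equal pi/n."""
    x = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
    return np.pi / n * f(x).sum()

def gauss_legendre(g, n):
    """n-point Gauss-Legendre rule for the integral of g over [-1, 1]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return w @ g(x)

# Reference value pi * I_0(1), with I_0 evaluated from its power series.
exact = math.pi * sum(0.25**k / math.factorial(k) ** 2 for k in range(25))
gc_err = abs(gauss_chebyshev(np.exp, 10) - exact)
gl_err = abs(gauss_legendre(lambda x: np.exp(x) / np.sqrt(1 - x**2), 10) - exact)
# Gauss-Chebyshev absorbs the singular factor into its weight function and
# converges spectrally here; Gauss-Legendre struggles with the singularity.
print(f"Gauss-Chebyshev error: {gc_err:.2e}")
print(f"Gauss-Legendre  error: {gl_err:.2e}")
```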

Looking at your previous comment, I worry whether this approach to numerical quadrature is going to help much if your function is defined by Taylor polynomials of degree $3$ or $4$ over subdomains. If you just integrate over a region containing several subdomains, doesn't that mean some higher-order derivative will be discontinuous, so that there are effectively multiple singularities over the interval of integration? Since you know the low-degree polynomial you are using over each subdomain, why can't you just integrate that polynomial exactly over the subdomain and add up the results over all subdomains?
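That last suggestion is a one-loop computation. The breakpoints and coefficients below are made up purely for illustration:

```python
import numpy as np

# Hypothetical piecewise definition: one degree-3 Taylor polynomial per
# subdomain [t_i, t_{i+1}], coefficients in increasing-power order.
breaks = [0.0, 0.5, 1.2, 2.0]
coeffs = [[1.0, -2.0, 0.5, 0.1],    # c0 + c1*x + c2*x^2 + c3*x^3 on [0, 0.5]
          [0.3, 1.0, -0.2, 0.0],    # on [0.5, 1.2]
          [-1.0, 0.0, 2.0, -0.5]]   # on [1.2, 2.0]

# Integrate each low-degree polynomial exactly over its own subdomain and
# sum the results; no quadrature rule (or quadrature error) is involved.
total = 0.0
for a, b, c in zip(breaks, breaks[1:], coeffs):
    antideriv = np.polynomial.Polynomial(c).integ()  # exact antiderivative
    total += antideriv(b) - antideriv(a)
print(total)
```

This is exact up to rounding, whereas any quadrature rule applied across the breakpoints would have to fight the derivative discontinuities there.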
