Convergence of the Jacobi method

numerical methods

Consider this question:

The function $u(x)=x(x-1), 0 \leq x \leq 1$, is defined by the equations $u^{\prime \prime}(x)=2,0 \leq x \leq 1$, and $u(0)=u(1)=0$. A difference approximation to the differential equation provides the estimates $u_m \approx u(m h), m=1,2, \ldots, M-1$, through the system of equations
$$
\left\{\begin{array}{l}
u_{m-1}-2 u_m+u_{m+1}=2 h^2, \quad m=1,2, \ldots, M-1 \\
u_0=u_M=0
\end{array}\right.
$$

where $h=1 / M$, and $M$ is a large positive integer. Show that the exact solution of the system is just $u_m=u(m h), m=1,2, \ldots, M-1$.
We write $u_m^{(\infty)}=u(mh)$, and suppose the system is solved by the Jacobi iteration, using the starting values $u_m^{(0)}=0, m=1,2, \ldots, M-1$. Prove that the iteration matrix $H$ has the spectral radius $\rho(H)=\cos (\pi / M)$. Further, by regarding the initial error vector $\boldsymbol{u}^{(0)}-\boldsymbol{u}^{(\infty)}$ as a linear combination of the eigenvectors of $H$, show that the largest component of $\boldsymbol{u}^{(k)}-\boldsymbol{u}^{(\infty)}$ for large $k$ is approximately $\left(8 / \pi^3\right) \cos ^k(\pi / M)$. Hence deduce that the Jacobi method requires about $2.5 M^2$ iterations to achieve $\left\|\boldsymbol{u}^{(k+1)}-\boldsymbol{u}^{(\infty)}\right\|_{\infty} \leq 10^{-6}$.

I have done everything except

Further, by regarding the initial error vector $\boldsymbol{u}^{(0)}-\boldsymbol{u}^{(\infty)}$ as a linear combination of the eigenvectors of $H$, show that the largest component of $\boldsymbol{u}^{(k)}-\boldsymbol{u}^{(\infty)}$ for large $k$ is approximately $\left(8 / \pi^3\right) \cos ^k(\pi / M)$. Hence deduce that the Jacobi method requires about $2.5 M^2$ iterations to achieve $\left\|\boldsymbol{u}^{(k+1)}-\boldsymbol{u}^{(\infty)}\right\|_{\infty} \leq 10^{-6}$.
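For context, the first part was straightforward: since $u$ is a quadratic, its second difference is exact,
$$
u((m-1)h) - 2u(mh) + u((m+1)h) = h^2 u''(mh) = 2h^2,
$$
so $u_m = u(mh)$ satisfies every equation of the system, and $u(0)=u(1)=0$ gives $u_0 = u_M = 0$.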

Here is what I tried: Expand in terms of an orthonormal eigenbasis
$$
u^0 -u^\infty = \sum a_i v^i,
$$

where
\begin{align*}
\mathbf{v}^k=\sqrt{\frac{2}{M}} \begin{pmatrix} \sin \frac{\pi k}{M} \\
\vdots \\
\sin \frac{\pi k (M-1)}{M} \end{pmatrix}, \qquad k = 1,\dots ,M-1,
\end{align*}

as the matrix in this scheme is tridiagonal, symmetric and Toeplitz.

Then let $\lambda_i$ be the eigenvalue associated with each eigenvector. Whence
$$
u^k -u^\infty = H^k \sum a_i v^i = \sum a_i \lambda_i^k v^i \sim a_1 \cos^k (\pi/M) v^1.
$$

So I need to find $a_1$. We have
$$a_1 = (u^0 - u^\infty) \cdot v^1 = -\sqrt{\frac{2}{M}} \left( h(h-1) \sin \pi h + 2h(2h-1) \sin 2\pi h + \dots \right)$$
since
$$
u^0 - u^\infty = - \left(u(h) \quad u(2h) \quad \dots \quad u((M-1)h)\right)^T.
$$

From here on I am struggling to reduce it to the desired form. Moreover, the problem says "approximately", which I think I should interpret as "asymptotically"?

Also, for the very last part, I am struggling to understand what "about" means, since all of the above is in the context of asymptotics; this makes it difficult for me to judge the accuracy of my expansions.

Question: If anybody could shed some light on how to get the $8/\pi^3$, and on the "about" in the last part, I would be very grateful.

Best Answer

Yes, the author meant asymptotically. You just need to express $u^{(0)}-u^{(\infty)}$ in the $v$ basis. I'll use a different normalisation:
$$
v^m = (\sin(\pi mn/M))_{1\leq n \leq M-1}.
$$
Since you only need one component, it is easier to project using a suitable dot product, and this is where the $M\to\infty$ limit (the continuous limit) helps. The natural dot product is the usual $L^2$ inner product:
$$
(f,g) = \int_0^1 f(x)g(x)\,dx.
$$
Indeed, this makes the operator symmetric and the eigenbasis orthogonal. Since you know which continuous function you are approximating in the continuous limit, you can just use:
$$
\begin{align}
(u^{(0)}-u^{(\infty)},v^1) &= \int_0^1 x(1-x)\sin(\pi x)\,dx \\
&= \frac{4}{\pi^3}.
\end{align}
$$
More generally, you have the expansion:
$$
u^{(0)}-u^{(\infty)} = \sum_{n\in\mathbb N^*,\text{ $n$ odd}} \frac{8}{(\pi n)^3}\sin(\pi nx).
$$
This therefore gives you the prefactor of the convergence rate in the continuous limit.
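For completeness, the integral evaluates by two integrations by parts (the boundary terms vanish at both steps, since $x(1-x)$ and $\sin(\pi x)$ are zero at $0$ and $1$):
$$
\int_0^1 x(1-x)\sin(\pi x)\,dx
= \frac{1}{\pi}\int_0^1 (1-2x)\cos(\pi x)\,dx
= \frac{2}{\pi^2}\int_0^1 \sin(\pi x)\,dx
= \frac{4}{\pi^3}.
$$
Since $(v^1,v^1)=\int_0^1\sin^2(\pi x)\,dx = 1/2$, the coefficient of $\sin(\pi x)$ in the expansion is $(4/\pi^3)/(1/2) = 8/\pi^3$. And since $\max_m \sin(\pi m/M) \approx 1$ (attained near $m = M/2$), the largest component of $u^{(k)}-u^{(\infty)}$ is indeed $\approx (8/\pi^3)\cos^k(\pi/M)$.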

For finite $M$, you'll need to define the dot product properly. You still want the $v$ to be orthogonal, so this determines your dot product, and you just need to calculate the basis change. Alternatively, it is easier to reduce to the $L^2$ inner product. The usual approach is to define an appropriate interpolation of your functions sampled on the grid $(m/M)_{1\leq m \leq M-1}$, so that the dot product is still the $L^2$ one while the $v$ remain orthogonal. The idea is to use the space of truncated expansions in $v$; the projection is then the same as in the continuous case, and the previous calculation still applies.
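As for the "about": setting $(8/\pi^3)\cos^k(\pi/M) = 10^{-6}$ and using $\cos(\pi/M)\approx e^{-\pi^2/2M^2}$ gives $k \approx (2/\pi^2)\ln(8\times 10^6/\pi^3)\,M^2 \approx 2.5\,M^2$. Here is a quick numerical sanity check (my own sketch in NumPy, not part of the derivation above):

```python
import numpy as np

# Run the Jacobi iteration on  u_{m-1} - 2 u_m + u_{m+1} = 2 h^2,
# u_0 = u_M = 0, and compare the error after k steps with the
# predicted asymptotic  (8/pi^3) cos^k(pi/M).

M = 20
h = 1.0 / M
x = np.arange(1, M) * h
u_exact = x * (x - 1.0)              # u(x) = x(x-1) solves the system exactly

u = np.zeros(M - 1)                  # starting values u_m^{(0)} = 0
k = 1000
for _ in range(k):
    left = np.concatenate(([0.0], u[:-1]))    # u_{m-1}, with u_0 = 0
    right = np.concatenate((u[1:], [0.0]))    # u_{m+1}, with u_M = 0
    u = 0.5 * (left + right - 2.0 * h**2)     # Jacobi update

err = np.max(np.abs(u - u_exact))
predicted = (8.0 / np.pi**3) * np.cos(np.pi / M) ** k
print(err, predicted)                # the two agree closely for large k

# Iterations needed for (8/pi^3) cos^k(pi/M) <= 1e-6, using
# cos(pi/M) ~ exp(-pi^2 / (2 M^2)):
k_star = 2.0 * M**2 * np.log(8.0e6 / np.pi**3) / np.pi**2
print(k_star / M**2)                 # approximately 2.5
```

The higher modes decay much faster (and the $n=M-1$ mode, whose eigenvalue has the same magnitude, has a tiny coefficient), so the $n=1$ term dominates almost exactly.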

Hope this helps.