An equality in Evans’ PDE book chapter 9

analysis, calculus, partial differential equations, real-analysis, sobolev-spaces

The following are the necessary background definitions and steps from Evans' book (pp. 529-530, PDE, second edition).


More precisely, assume that the functions $w_{k}=w_{k}(x)$ $(k=1, \ldots)$ are smooth and
$$
\left\{w_{k}\right\}_{k=1}^{\infty} \text { is an orthonormal basis of } H_{0}^{1}(U)
$$

taken with the inner product $(u, v)=\int_{U} D u \cdot D v \, d x$. (We could for instance take $\left\{w_{k}\right\}_{k=1}^{\infty}$ to be the set of appropriately normalized eigenfunctions for $-\Delta$ in $H_{0}^{1}(U)$.)
We will look for a function $u_{m} \in H_{0}^{1}(U)$ of the form
$$
u_{m}=\sum_{k=1}^{m} d_{m}^{k} w_{k}\tag{6}
$$

where we hope to select the coefficients $d_{m}^{k}$ so that
$$
\int_{U} \mathbf{a}\left(D u_{m}\right) \cdot D w_{k} d x=\int_{U} f w_{k} d x \quad(k=1, \ldots, m)\tag{7}
$$

THEOREM 1 (Construction of approximate solutions). For each integer $m=1, \ldots$, there exists a function $u_{m}$ of the form (6) satisfying the identities (7).

Proof. Define the continuous function $\mathbf{v}: \mathbb{R}^{m} \rightarrow \mathbb{R}^{m}, \mathbf{v}=\left(v^{1}, \ldots, v^{m}\right)$, by setting
$$
v^{k}(d):=\int_{U} \mathbf{a}\left(\sum_{j=1}^{m} d_{j} D w_{j}\right) \cdot D w_{k}-f w_{k} d x \quad(k=1, \ldots, m)\tag{10}
$$

for each point $d=\left(d_{1}, \ldots, d_{m}\right) \in \mathbb{R}^{m}$. Now
$$
\begin{aligned}
\mathbf{v}(d) \cdot d &=\int_{U} \mathbf{a}\left(\sum_{j=1}^{m} d_{j} D w_{j}\right) \cdot\left(\sum_{j=1}^{m} d_{j} D w_{j}\right)-f\left(\sum_{j=1}^{m} d_{j} w_{j}\right) d x \\
& \geq \int_{U} \alpha\left|\sum_{j=1}^{m} d_{j} D w_{j}\right|^{2}-\beta-f\left(\sum_{j=1}^{m} d_{j} w_{j}\right) d x \quad \text { by }(5) \\
&=\alpha|d|^{2}-\beta|U|-\sum_{j=1}^{m} d_{j} \int_{U} f w_{j} d x
\end{aligned}
$$

Transcribed from Screenshot1 and Screenshot2.


My question is why the last equality holds. To be more specific, why do we have $$ \int_U \Big\vert \sum_{j=1}^m d_j Dw_j \Big\vert^2 \, dx=\vert d \vert^2, $$ where $Dw_j=(\partial_{x_1} w_j, \ldots , \partial_{x_n} w_j)$?

My attempt: $$\Big\vert \sum_{j=1}^m d_j Dw_j \Big\vert^2=\Big\vert \Big(\sum_{j=1}^m d_j \partial_{x_1} w_j, \ldots , \sum_{j=1}^m d_j \partial_{x_n} w_j\Big) \Big\vert^2 = \Big(\sum_{j=1}^m d_j \partial_{x_1} w_j\Big)^2+ \cdots + \Big(\sum_{j=1}^m d_j \partial_{x_n} w_j\Big)^2.$$

Up to this point, I don't know how to proceed to show that the integral of this expression over $U$ is greater than or equal to $\vert d \vert^2$. Any help will be appreciated.

Best Answer

The result follows from $\{w_j\}$ being an orthonormal set of vectors in $H^1_0(U)$ with inner product given by
$$(u,v)=\int_U Du\cdot Dv \, dx.$$
Indeed,
\begin{align*} \int_U \bigg \vert \sum_{i=1}^m d_i Dw_i \bigg \vert^2 d x &= \int_U \bigg ( \sum_{i=1}^m d_i Dw_i \bigg ) \cdot \bigg ( \sum_{j=1}^m d_j Dw_j \bigg ) d x \\ &=\sum_{i,j=1}^m d_i d_j \int_U D w_i \cdot D w_j \, d x \\ &= \sum_{i,j=1}^m d_i d_j \delta_{ij} \\ &= \vert d \vert^2, \end{align*}
where $\delta_{ij}$ is the Kronecker delta.
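
If a numerical sanity check helps, here is a minimal sketch (not part of the original answer). It assumes the concrete basis mentioned in Evans' parenthetical remark, specialized to $U=(0,1)$: the eigenfunctions $w_k(x)=\frac{\sqrt{2}}{k\pi}\sin(k\pi x)$ of $-\Delta$, normalized so that $(w_j,w_k)=\int_0^1 w_j'w_k'\,dx=\delta_{jk}$. The values of `m`, the coefficient vector `d`, and the grid size are arbitrary choices for the check.

```python
import numpy as np

# Check the identity \int_0^1 |sum_k d_k Dw_k|^2 dx = |d|^2 on U = (0, 1),
# assuming the normalized eigenfunctions w_k(x) = sqrt(2)/(k*pi) * sin(k*pi*x),
# so that Dw_k(x) = w_k'(x) = sqrt(2) * cos(k*pi*x) and (w_j, w_k) = delta_{jk}.

m = 5
d = np.array([1.3, -0.7, 2.0, 0.4, -1.1])   # an arbitrary coefficient vector d

N = 200_000
x = (np.arange(N) + 0.5) / N                # midpoint grid on (0, 1)

# Rows are Dw_1, ..., Dw_m evaluated on the grid.
Dw = np.array([np.sqrt(2.0) * np.cos(k * np.pi * x) for k in range(1, m + 1)])

Du_m = d @ Dw                               # gradient of u_m = sum_k d_k w_k

lhs = np.mean(Du_m ** 2)                    # midpoint rule for \int_0^1 |Du_m|^2 dx
rhs = np.sum(d ** 2)                        # |d|^2

print(lhs, rhs)                             # both ~7.55, up to quadrature error
```

Of course this only checks one particular basis; the computation above shows the identity holds for any orthonormal basis of $H_0^1(U)$ with respect to this inner product.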