[Math] a particular use of Gram-Schmidt orthogonalization

Tags: gram-schmidt, linear-algebra, matrices

We have a linear space $V$ of $m \times n$ matrices. I know that we can use Gram-Schmidt to construct an orthonormal basis, but the natural basis for this space, consisting of the matrices $E_{ij}$ whose $(i,j)$-th entry is $1$ and whose remaining entries are $0$, is already orthonormal: each matrix is orthogonal to all the others, and each has norm $1$.

Where does the algorithm come into use? Why would somebody go through the trouble of constructing a new basis when the natural one fits the bill?
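
For concreteness, here is a quick numerical check of that claim, assuming the Frobenius inner product $\langle A, B\rangle = \operatorname{tr}(A^{\top}B)$, i.e. the entrywise dot product:

```python
import numpy as np

m, n = 2, 3
# Standard basis E_ij: a 1 in position (i, j), zeros everywhere else.
basis = [np.eye(1, m * n, k).reshape(m, n) for k in range(m * n)]

# Frobenius inner product <A, B> = sum_ij A_ij * B_ij = trace(A^T B).
gram = np.array([[np.sum(A * B) for B in basis] for A in basis])
print(np.allclose(gram, np.eye(m * n)))  # True: the basis is already orthonormal
```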

Best Answer

Consider the case where the linear space $V$ is a subspace of a larger space $W$. Using Gram-Schmidt, we can construct an orthonormal basis of $V$ without leaving $V$, i.e. without involving any vectors from $W \setminus V$. There might not be a canonical basis of $V$ at all.
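
As a minimal numerical sketch of this (a hypothetical `gram_schmidt` helper, assuming real vectors and the standard dot product): the subspace is handed to us only as a spanning set inside $W = \mathbb{R}^3$, and every vector the algorithm produces is a linear combination of the inputs, so it never leaves $V$.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize the rows of `vectors` (assumed linearly independent)."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for u in basis:
            w = w - (w @ u) * u      # remove the component along u
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

# V = span{(1,0,1), (1,1,0)} as a subspace of W = R^3:
V = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
Q = gram_schmidt(V)
print(Q)           # an orthonormal basis of V; each row is a combination of the inputs
print(Q @ Q.T)     # ~ 2x2 identity, so the rows are orthonormal
```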

One application is constructing orthonormal bases of eigenspaces of symmetric matrices. Eigenspaces for different eigenvalues are orthogonal to each other, but within a single eigenspace one still has to do some work, such as Gram-Schmidt, to obtain an orthonormal basis, and hence an orthonormal eigenbasis of the whole space.
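
A small sketch of that situation, using one possible choice of matrix: a symmetric matrix whose eigenvalue $1$ has a two-dimensional eigenspace, in which two naturally chosen eigenvectors are not orthogonal until one Gram-Schmidt step is applied.

```python
import numpy as np

# Symmetric matrix: orthogonal projection onto the plane orthogonal to (1,1,1).
# Eigenvalue 0 has eigenvector (1,1,1); eigenvalue 1 has a 2-dimensional eigenspace.
A = np.eye(3) - np.ones((3, 3)) / 3.0

# Two eigenvectors for eigenvalue 1 that are not orthogonal to each other:
v1 = np.array([1.0, -1.0, 0.0])
v2 = np.array([1.0, 0.0, -1.0])
print(np.allclose(A @ v1, v1), np.allclose(A @ v2, v2), v1 @ v2)  # True True 1.0

# One Gram-Schmidt step inside the eigenspace yields an orthonormal pair:
u1 = v1 / np.linalg.norm(v1)
w2 = v2 - (v2 @ u1) * u1
u2 = w2 / np.linalg.norm(w2)
print(round(u1 @ u2, 12), round(np.linalg.norm(u2), 12))  # 0.0 1.0
```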

As a more concrete example: What's the 'natural' or canonical basis of $\operatorname{span}\Bigg\{\begin{pmatrix}1\\0\\1\end{pmatrix}, \begin{pmatrix}1\\1\\0\end{pmatrix}\Bigg\}$?
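
Taking the two vectors in the order written, Gram-Schmidt produces one orthonormal basis (a different ordering would give a different result, which is exactly why no single choice is canonical):

$$u_1=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\0\\1\end{pmatrix},\qquad w_2=\begin{pmatrix}1\\1\\0\end{pmatrix}-\frac{1}{2}\begin{pmatrix}1\\0\\1\end{pmatrix}=\begin{pmatrix}1/2\\1\\-1/2\end{pmatrix},\qquad u_2=\frac{w_2}{\|w_2\|}=\frac{1}{\sqrt{6}}\begin{pmatrix}1\\2\\-1\end{pmatrix}.$$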
