Gram-Schmidt Process – The Need and Importance

Tags: gram-schmidt, linear-algebra, orthonormal

As far as I understand, Gram–Schmidt orthogonalization starts with a set of linearly independent vectors and produces a set of mutually orthonormal vectors that spans the same space as the starting vectors.

I have no problem understanding the algorithm itself, but here is the thing I fail to get: why do I need to do all these calculations? For example, instead of doing the calculations in the example section of that wiki page, why can't I just grab the two basis vectors $w_1 = (1, 0)'$ and $w_2 = (0, 1)'$? They are clearly orthonormal and span the same subspace as the original vectors $v_1 = (3, 1)'$ and $v_2 = (2, 2)'$.

It is clear that I'm missing something important, but I can't see what exactly.

Best Answer

Orthonormal (ON) bases are nice because several formulas become much simpler when vectors are expressed with respect to an ON basis.

Example: Let $\mathcal E = \{e_1, \dots, e_n\}$ be an ON basis. Then the Fourier expansion of any vector $v\in\operatorname{span}(\mathcal E)$ is just $$v = (v\cdot e_1)e_1 + (v\cdot e_2)e_2 + \cdots + (v\cdot e_n)e_n$$

Notice that there are no normalization factors and we don't need to construct a dual basis -- it's just a really simple formula.
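To make this concrete, here is a minimal numerical sketch (using NumPy; the particular basis and vector are made up for illustration) showing that the dot products $v \cdot e_i$ alone reconstruct $v$, with no normalization factors:

```python
import numpy as np

# A hypothetical ON basis of R^2: the standard basis rotated by 45 degrees.
e1 = np.array([1.0, 1.0]) / np.sqrt(2)
e2 = np.array([-1.0, 1.0]) / np.sqrt(2)

v = np.array([3.0, 1.0])  # any vector in span{e1, e2} = R^2

# Fourier expansion: v = (v.e1) e1 + (v.e2) e2
reconstructed = (v @ e1) * e1 + (v @ e2) * e2
print(np.allclose(v, reconstructed))  # True
```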

In your example, of course $\{(1,0),(0,1)\}$ spans the same space as $\{(3,1),(2,2)\}$. But let me provide an example of my own: what about $\{(1.1,1.2,0.9,2.1,4),\ (3,-2,6,14,2),\ (6,6,6,3.4,11.1)\}$? There is certainly no subset of the standard basis vectors that spans the same space as these linearly independent vectors, and they are themselves a pretty poor choice of basis because they're not orthonormal. It'd sure be nice if we had some algorithm that could produce an ON basis from them...
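Here is a minimal sketch of such an algorithm: classical Gram–Schmidt in NumPy, applied to the three vectors above. (This is the textbook, unstabilized version; for serious numerical work one would use modified Gram–Schmidt or a QR factorization.)

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an ON basis spanning the same space as `vectors`.

    Classical Gram-Schmidt: subtract from each vector its projections
    onto the previously accepted basis vectors, then normalize.
    Assumes the input vectors are linearly independent.
    """
    basis = []
    for v in vectors:
        w = v - sum((v @ e) * e for e in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

vs = [np.array([1.1, 1.2, 0.9, 2.1, 4.0]),
      np.array([3.0, -2.0, 6.0, 14.0, 2.0]),
      np.array([6.0, 6.0, 6.0, 3.4, 11.1])]

es = gram_schmidt(vs)
# Check orthonormality: the Gram matrix of the result is the identity.
G = np.array([[a @ b for b in es] for a in es])
print(np.allclose(G, np.eye(3)))  # True
```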
