Why are orthogonal polynomials “unique”

analysis, linear algebra, numerical methods

I am reading Numerical Analysis by Walter Gautschi, and the author says (I quote):

[Talks about $\{1, t, t^2, …, t^n \}$] Since a linearly independent set can be orthogonalized by Gram-Schmidt, any measure $d \lambda$ of the type considered generates a unique [emphasis mine] set of (monic) orthogonal polynomials $\pi_j(t) = \pi_j(t; d \lambda)$, $j=0, 1, 2, …,$ satisfying

$$\deg \pi_j = j, \quad j = 0, 1, 2, \dots,$$
$$\int_{\mathbb{R}} \pi_k(t) \pi_{\ell}(t) \, d\lambda(t) = 0 \text{ if } k \neq \ell.$$

These are called orthogonal polynomials relative to the measure $d\lambda$.

From what I understand, the author is saying that (for example) if we take the space spanned by $1, t, t^2$ and we have a fixed inner product $(u, v) = \int_a^b u(t)v(t)w(t) dt$, then there exists a unique set of three monic, orthogonal polynomials which span this space. Furthermore, the author seems to say that this follows directly from the Gram-Schmidt process.

I can sort of see a reason why these polynomials are unique: having constructed an orthogonal set $\{p_0, p_1, \dots, p_{n-1}\}$ of $n$ polynomials, we want $p_n$ (which has $n+1$ coefficients) to satisfy
$$(p_k, p_n) = 0 \text{ for } k = 0, 1, …, n-1$$
$$\text{leading coefficient = 1}$$

which is a system of $n+1$ equations in $n+1$ unknowns, so it could plausibly have a unique solution. But I don't see why this system has to be non-singular.
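As a concrete sanity check, here is a small sympy sketch of the monic Gram–Schmidt construction, assuming for illustration the weight $w(t) = 1$ on $[-1, 1]$ (the Legendre case; the function names are my own, not from the book). Subtracting projections of $t^j$ onto lower-degree orthogonal polynomials never touches the leading coefficient, so the result stays monic:

```python
import sympy as sp

t = sp.symbols('t')

def inner(u, v):
    """Inner product (u, v) = ∫_{-1}^{1} u(t) v(t) dt (weight w = 1)."""
    return sp.integrate(u * v, (t, -1, 1))

def monic_gram_schmidt(n):
    """Orthogonalize 1, t, ..., t^n.  Each monomial is monic, and the
    subtracted projections all have lower degree, so each result is monic."""
    ps = []
    for j in range(n + 1):
        p = t**j
        for q in ps:
            p -= inner(t**j, q) / inner(q, q) * q
        ps.append(sp.expand(p))
    return ps

print(monic_gram_schmidt(3))
# the first few monic Legendre polynomials: 1, t, t^2 - 1/3, t^3 - (3/5) t
```

This recovers the monic Legendre polynomials, matching the uniqueness claim for this particular measure.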

Best Answer

I think you misread the line, which is not written too well.

What the line is probably saying is the following: you have a vector space $V = \operatorname{span}\{1, t, \dots, t^n\}$ and an inner product on this space, $\langle f, g \rangle = \int_{\mathbb R} f(t) g(t) \, d\lambda(t)$.

Now, in this situation, given any ordered basis (in this case $(1, t, \dots, t^n)$), Gram–Schmidt produces a unique orthonormal basis.

But if you change the basis, or even just its order, you would generally get a different set with the same properties.
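To illustrate the order dependence, here is a sympy sketch (my own illustration, again assuming the inner product $\int_{-1}^{1} u(t) v(t) \, dt$, and skipping the final normalization step for readability): running Gram–Schmidt on the same monomials in two different orders produces two different orthogonal sets.

```python
import sympy as sp

t = sp.symbols('t')

def inner(u, v):
    """Inner product (u, v) = ∫_{-1}^{1} u(t) v(t) dt."""
    return sp.integrate(u * v, (t, -1, 1))

def gram_schmidt(basis):
    """Gram-Schmidt on an ordered list of polynomials (no normalization)."""
    ps = []
    for b in basis:
        p = b
        for q in ps:
            p -= inner(b, q) / inner(q, q) * q
        ps.append(sp.expand(p))
    return ps

print(gram_schmidt([1, t, t**2]))   # third element: t^2 - 1/3
print(gram_schmidt([t**2, t, 1]))   # third element: 1 - (5/3) t^2
```

Both outputs are orthogonal sets spanning the same space, but they are different sets of polynomials: the result depends on the ordered basis fed to Gram–Schmidt, which is exactly the point above.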
