[Math] Projection of a function onto another

Tags: functions, inner-products, projective-geometry, vector-spaces, vectors

In Euclidean space, I can picture the projection of one vector onto another (computed via the dot product). More generally, for any vectors of the form $[v_1,v_2,v_3,\cdots]$ I know how to compute their projections onto one another. But for continuous vectors, such as when functions are used as vectors, I can't picture what the dot product (or inner product) or projection means. Can anyone help me with this?

Best Answer

Let's reconsider the familiar vector space $\mathbb R^2$ for a moment, examine the properties of that space and how they might inform an intuition about vector spaces of functions, and try to distinguish good intuition from harmful intuition.

Consider the vectors $(1,1)$ and $(2,-2).$ If you look only at their $x$ coordinates, the vectors appear to be in the same direction. If you look only at $y$ coordinates, the vectors appear to be opposite. In fact, the vectors are orthogonal, but to find this from their coordinates you must look at all of their coordinates.
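A quick numerical check of the claim above, using the two vectors from the text:

```python
# The dot product uses every coordinate; looking at only one coordinate
# would be misleading, but the full sum cancels exactly.
u = (1, 1)
v = (2, -2)

dot = sum(a * b for a, b in zip(u, v))
print(dot)  # 0 -> the vectors are orthogonal
```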

If all you know about a vector is its $x$ coordinate, you know very little about its direction. It could be almost straight up, almost straight down, or anything in between.

Trying to compare functions $f(x)$ and $g(x)$ by looking at only one value of $x$ is worse than trying to find the angles of vectors by looking at only one coordinate. To decide whether functions are orthogonal you have to look at the entire region over which the inner product is measured, and find out whether the functions tend to imitate one another, oppose one another, or neither.

Another property of vectors in $\mathbb R^2$ is that if you project an arbitrary vector onto each of two orthogonal vectors, and then add the two projected vectors, you get back the original vector. This does not work if the vectors you project onto are not orthogonal. But it does work for projecting a function onto other functions, provided that the functions you project onto are orthogonal. (It may also require an infinite set of such functions to project onto, because the typical function vector spaces we look at are infinite-dimensional.)
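The decompose-and-recombine property can be sketched in $\mathbb R^2$ with the orthogonal pair from before; the test vector $(3, 5)$ is an arbitrary choice for illustration:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project(v, u):
    """Projection of v onto u: (<v,u>/<u,u>) u."""
    c = dot(v, u) / dot(u, u)
    return tuple(c * x for x in u)

v = (3.0, 5.0)                     # arbitrary vector
e1, e2 = (1.0, 1.0), (2.0, -2.0)   # the orthogonal pair from above

p1 = project(v, e1)
p2 = project(v, e2)
recovered = tuple(a + b for a, b in zip(p1, p2))
print(recovered)  # (3.0, 5.0) -- the projections add back to v
```

If `e1` and `e2` were not orthogonal, the sum of the two projections would generally not equal `v`.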

Consider the space of continuous real-valued functions $f$ such that $f(x+2\pi)=f(x),$ with the inner product $$ \langle f,g \rangle = \frac{1}{2\pi} \int_0^{2\pi} f(x)g(x)\,dx. $$

The sine and cosine functions are in this space and are orthogonal. Although they are almost always both non-zero, they oppose each other as much as they reinforce each other, and the integral comes out to exactly zero.
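This orthogonality is easy to verify numerically. Here is a sketch using a simple Riemann-sum approximation of the inner product defined above (the step count `n` is an arbitrary choice, not production-grade quadrature):

```python
import math

def inner(f, g, n=10000):
    """<f,g> = (1/2*pi) * integral of f(x)g(x) over [0, 2*pi], by Riemann sum."""
    h = 2 * math.pi / n
    return sum(f(i * h) * g(i * h) for i in range(n)) * h / (2 * math.pi)

print(inner(math.sin, math.cos))  # ~0: sine and cosine are orthogonal
print(inner(math.sin, math.sin))  # ~0.5: the "length squared" of sin
```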

For $0<\alpha<\frac\pi2,$ the function $\sin(x+\alpha)$ also is in this space but is not orthogonal to either $\sin(x)$ or $\cos(x).$ Instead, its inner product with $\sin(x)$ is $\frac12\cos(\alpha)$ and its inner product with $\cos(x)$ is $\frac12\sin(\alpha).$

But if we take the inner product of $\sin(x)$ with itself, we get $\frac12,$ and the same is true for $\cos(x).$ So if we use the usual formula for projecting a vector $v$ onto a vector $u,$ $$ \left(\frac{\langle v,u \rangle}{\langle u,u \rangle}\right) u, $$ and use it to project $\sin(x+\alpha)$ onto the sine and cosine functions, we get (respectively) $$ \cos(\alpha)\sin(x) \qquad \text{and} \qquad \sin(\alpha)\cos(x). $$ And as we know from the identity for the sine of an angle sum, the sum of these two functions gives us back the function $\sin(x+\alpha).$ So this particular intuition from the vector space $\mathbb R^2$ does carry over at least somewhat into at least one inner product space of functions. If you look at the derivation of a Fourier series for a periodic function, this idea of adding the projections onto orthogonal functions works quite well.
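The whole computation can be checked numerically. The sketch below uses a Riemann-sum approximation of the inner product; the value of `ALPHA` and the sample point `x` are arbitrary choices:

```python
import math

ALPHA = 0.7  # any value in (0, pi/2)

def inner(f, g, n=10000):
    """<f,g> = (1/2*pi) * integral of f(x)g(x) over [0, 2*pi], by Riemann sum."""
    h = 2 * math.pi / n
    return sum(f(i * h) * g(i * h) for i in range(n)) * h / (2 * math.pi)

f = lambda x: math.sin(x + ALPHA)

# Projection coefficients <f,u>/<u,u> for u = sin and u = cos:
c_sin = inner(f, math.sin) / inner(math.sin, math.sin)
c_cos = inner(f, math.cos) / inner(math.cos, math.cos)
print(c_sin, math.cos(ALPHA))  # both ~ cos(alpha)
print(c_cos, math.sin(ALPHA))  # both ~ sin(alpha)

# The two projections add back to the original function at any point:
x = 1.3
print(c_sin * math.sin(x) + c_cos * math.cos(x), f(x))  # ~ equal
```

This is exactly the first step of computing a Fourier series: each Fourier coefficient is the projection coefficient of the function onto one member of the orthogonal family $\{\sin(kx), \cos(kx)\}$.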
