Fourier Series – Justification for Describing Functions as ‘Orthogonal’

Tags: fourier-series, terminology

When introducing Fourier series, my lecturer stated that two periodic functions $f$ and $g$ with period $2L$ are orthogonal iff $$\int^{L}_{-L}{f(x)g(x)}\,\mathrm dx=0.$$ Wikipedia agrees, even defining the above integral to be the inner product of $f$ and $g$, in complete analogy with the terminology from linear algebra. My lecturer hinted that there is a reason for the terminology being the way it is, but personally I'm having a hard time seeing what the above integral has to do with orthogonality as I understand it.
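For concreteness, here is a quick numerical sanity check of that definition, taking $L = \pi$ and approximating the integral with a crude midpoint Riemann sum (the helper `inner` is purely illustrative, not standard notation): distinct members of the trigonometric family integrate against each other to (essentially) zero, while a function paired with itself does not.

```python
import math

def inner(f, g, L=math.pi, n=10_000):
    # Approximate <f, g> = ∫_{-L}^{L} f(x) g(x) dx with a midpoint Riemann sum.
    h = 2 * L / n
    return sum(f(-L + (k + 0.5) * h) * g(-L + (k + 0.5) * h) for k in range(n)) * h

# Distinct trigonometric basis functions: inner product ≈ 0, i.e. orthogonal.
print(inner(lambda x: math.sin(2 * x), lambda x: math.sin(3 * x)))  # ≈ 0
print(inner(math.sin, math.cos))                                    # ≈ 0

# A function against itself is not orthogonal: here the integral is ≈ π.
print(inner(lambda x: math.sin(2 * x), lambda x: math.sin(2 * x)))  # ≈ π
```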

On a related note, does this connection (between orthogonal functions and orthogonality more generally, assuming there is one) in some way 'explain' why the sine and cosine functions can be used to construct any continuous function in a Fourier series, while other sets of functions cannot? Currently the orthogonality seems to play an almost magical role in deriving the coefficient equations, and I'm wondering if there's something deeper there.
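(By "the coefficient equations" I mean the standard computation, sketched below, in which orthogonality does all the work: multiply the assumed expansion by a single basis function, integrate term by term, and every cross term vanishes.)

```latex
% Assume f has the expansion (period 2L):
f(x) = \frac{a_0}{2}
     + \sum_{m=1}^{\infty}\left( a_m \cos\frac{m\pi x}{L}
                               + b_m \sin\frac{m\pi x}{L} \right)

% Multiply both sides by cos(n*pi*x/L) and integrate over [-L, L].
% Orthogonality kills every term except m = n:
\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,\mathrm dx
  = a_n \int_{-L}^{L} \cos^2\frac{n\pi x}{L}\,\mathrm dx
  = a_n L
\quad\Longrightarrow\quad
a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,\mathrm dx
```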

Thanks!

Best Answer

Indeed, there is deep theory involved behind all of this. When you learn more about inner product/Hilbert spaces, you will see spaces that have a certain operation on them called an inner product whose axioms/properties you can find in the link. These spaces are interesting because inner products give us a sense of "angle" or "direction" (note how its properties generalize the dot product of Euclidean space). Further, we can define a norm on that space with this inner product: $ \|x\| = \sqrt{\langle x,x \rangle}$ and then this norm gives us an idea of the "length" of a vector in this space.

Then we define a metric on this space with $ d(x,y) = \|x-y\|$ and this metric gives us an idea of distance between two points (which leads us to think about convergence of sequences and other analysis-type questions). This metric also then allows us to generalize open sets which endows a sense of whether some points are "close together" or "split apart", and these open sets then induce a topology, which allows us to study topological properties such as continuity and connectedness.
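To make the norm-and-metric step concrete, here is a small illustrative sketch in Python (the helper names `inner`, `norm`, and `dist` are mine, and the integral is approximated by a crude midpoint sum): the inner product gives a norm via $\|f\| = \sqrt{\langle f,f \rangle}$, and the norm gives a distance between functions via $d(f,g) = \|f-g\|$.

```python
import math

def inner(f, g, L=math.pi, n=10_000):
    # <f, g> = ∫_{-L}^{L} f(x) g(x) dx, approximated by a midpoint Riemann sum.
    h = 2 * L / n
    return sum(f(-L + (k + 0.5) * h) * g(-L + (k + 0.5) * h) for k in range(n)) * h

def norm(f):
    return math.sqrt(inner(f, f))       # ||f|| = sqrt(<f, f>)

def dist(f, g):
    return norm(lambda x: f(x) - g(x))  # d(f, g) = ||f - g||

# On [-pi, pi], ∫ sin^2 = pi, so ||sin|| = sqrt(pi); and d(f, f) = 0.
print(norm(math.sin))            # ≈ 1.7725 (sqrt(pi))
print(dist(math.sin, math.sin))  # 0.0
```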

Clearly that is just an overload of new information, so you may wonder: what does it all mean? Well, essentially each of these stages is an abstraction that is structurally similar to the well-known Euclidean space. Hilbert spaces are the most similar to $\mathbb{R}^n$: in them we have all the ideas above, of angle, direction, magnitude of a vector, distance between vectors, neighbourhoods of vectors, and (by completeness) convergence of every Cauchy sequence. This is why Hilbert spaces are so interesting: we have plenty of intuition about how they should behave, and they have enough structure to give us plenty of useful results, yet they are general enough to be substantially different from $\mathbb{R}^n$.

One example of a Hilbert space is $L^2$, the space of Lebesgue square-integrable functions with the inner product $\langle f,g \rangle = \int_X f\cdot \bar g\;\mathrm d\mu$, and the space you are working in is an instance of it. The integral in your post is precisely the inner product of this space (remember, this is the generalization of the dot product), and just as we declare two Euclidean vectors orthogonal when their dot product is $0$, we do the same here. As anon commented, orthogonal (nonzero) functions are linearly independent, which is a familiar property. If we choose enough linearly independent vectors wisely to form a basis, we can write any vector in the space as a (possibly infinite) linear combination of basis vectors; such a combination is exactly a Fourier series.
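To see this projection picture in action, here is a small illustrative sketch (the names `inner` and `fourier_partial_sum` are mine, and the quadrature is a crude midpoint sum): each Fourier coefficient is literally $\langle f, e \rangle / \langle e, e \rangle$ for a basis function $e$, the same formula used to project onto an orthogonal basis of $\mathbb{R}^n$.

```python
import math

L = math.pi  # work with period 2*pi on [-pi, pi] for simplicity

def inner(f, g, n=10_000):
    # <f, g> = ∫_{-L}^{L} f(x) g(x) dx, approximated by a midpoint Riemann sum.
    h = 2 * L / n
    return sum(f(-L + (k + 0.5) * h) * g(-L + (k + 0.5) * h) for k in range(n)) * h

def fourier_partial_sum(f, N):
    # Project f onto the orthogonal family {1, cos kx, sin kx}:
    # each coefficient is <f, e> / <e, e>, exactly as in R^n.
    a0 = inner(f, lambda x: 1.0) / (2 * L)  # <1, 1> = 2L
    a = [inner(f, lambda x, k=k: math.cos(k * x)) / L for k in range(1, N + 1)]
    b = [inner(f, lambda x, k=k: math.sin(k * x)) / L for k in range(1, N + 1)]
    def s(x):
        return a0 + sum(a[k - 1] * math.cos(k * x) + b[k - 1] * math.sin(k * x)
                        for k in range(1, N + 1))
    return s

# Partial sums of f(x) = x converge to x away from the jump at ±pi.
s = fourier_partial_sum(lambda x: x, 50)
print(s(1.0))  # close to 1.0
```

The design point is that nothing here is specific to sines and cosines: any orthogonal family in the space supports the same coefficient formula, which is exactly the "deeper" structure behind the seemingly magical derivation.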

Related Question