Why are infinite-dimensional vector spaces usually equipped with additional structure?

functional-analysis, linear-algebra, soft-question, vector-spaces

In a first course in linear algebra, it is common for instructors to mostly restrict their attention to finite-dimensional vector spaces. These vector spaces are usually not assumed to be equipped with any additional structure, such as an inner product, a norm, or a topology. On the other hand, it seems that when infinite-dimensional vector spaces are encountered in later courses, it is much more common to equip them with additional structure. Why is this? A partial answer might be that infinite-dimensional vector spaces are often studied in functional analysis, where extra structure is needed to properly define analytical concepts such as infinite series. However, I would have expected "pure" infinite-dimensional vector spaces to have a use in some area of mathematics.

I have now also asked this on MathOverflow.

Best Answer

Consider the following example.

We'd like to understand vectors over, say, the field $\mathbb{R}$, in finite dimension $n$, prototypically as lists, or tuples:

$$\mathbf{v} = (a_1, a_2, \cdots, a_n).$$

Note here that the right-hand side is literally a tuple of reals; it is the "meat" of what the objects in the space "$\mathbb{R}^n$" "really are". The reals $a_j$ are the components of the vector. These are not the same concept as coordinates, which refer to the values $b_j$ obtained when $\mathbf{v}$ is expressed in a basis $\mathcal{B}$:

$$\mathbf{v} = b_1 \mathbf{b}_1 + b_2 \mathbf{b}_2 + \cdots + b_n \mathbf{b}_n$$

where $\mathcal{B} = \{ \mathbf{b}_1, \mathbf{b}_2, \cdots, \mathbf{b}_n \}$. Given a basis, any vector has coordinates, but not all vectors have components, because the elements of the underlying set of a vector space need not be tuples at all.
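To make the distinction concrete, here is a small example (with numbers chosen purely for illustration): in $\mathbb{R}^2$, take the basis $\mathcal{B} = \{ (1, 1), (1, -1) \}$. The vector $\mathbf{v} = (3, 1)$ has components $3$ and $1$, but its coordinates with respect to $\mathcal{B}$ are $2$ and $1$, because

$$(3, 1) = 2\,(1, 1) + 1\,(1, -1).$$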

But here's the thing: it's quite obvious that, in this case, we have a basis in which the coordinates and components are identical, namely:

$$\begin{matrix} \mathbf{e}_1 := (1, 0, 0, \cdots, 0)\\ \mathbf{e}_2 := (0, 1, 0, \cdots, 0)\\ \vdots\\ \mathbf{e}_n := (0, 0, 0, \cdots, 1)\end{matrix}$$

Then $(a_1, a_2, \cdots, a_n) = \sum_{j=1}^{n} a_j \mathbf{e}_j$, so the coordinates and components are identical. We call this the standard basis for $\mathbb{R}^n$.
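For instance (a throwaway illustration), in $\mathbb{R}^3$ we have

$$(2, -1, 5) = 2\,\mathbf{e}_1 - 1\,\mathbf{e}_2 + 5\,\mathbf{e}_3,$$

so the coordinates in the standard basis are exactly the components $2$, $-1$, $5$.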

Now consider infinite dimension. It's quite natural to extend from a finite tuple of $n$ reals to an infinite tuple, or, as some might be more comfortable saying, a "sequence" (the two are "equivalent", though one might call them different "data types"):

$$\mathbf{v} = (a_1, a_2, a_3, \cdots)$$

where there are now infinitely many components $a_j$. We'd like, then, by analogy with before, to write down a basis in the form

$$\begin{align} \mathbf{e}_1 &:= (1, 0, 0, 0, \cdots)\\ \mathbf{e}_2 &:= (0, 1, 0, 0, \cdots)\\ \mathbf{e}_3 &:= (0, 0, 1, 0, \cdots)\\ &\;\;\vdots\end{align}$$

so that we could say

$$\mathbf{v} = \sum_{i=1}^{\infty} a_i \mathbf{e}_i.$$

The problem, though, is that by definition an infinite summation like this requires us to take the following limit:

$$\mathbf{v} = \lim_{n \rightarrow \infty} \left(\sum_{i=1}^{n} a_i \mathbf{e}_i\right).$$

And the thing in the brackets is a vector, so what we have is a limit of vectors.

But limits don't make sense unless you have some way to compare vectors for notions like proximity, or "approximate-ness", or what have you!
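To see this concretely (an illustrative example, not the only way the problem shows up), take $\mathbf{v} = (1, 1, 1, \cdots)$. Each partial sum

$$\sum_{i=1}^{n} \mathbf{e}_i = (\underbrace{1, \cdots, 1}_{n}, 0, 0, \cdots)$$

is a perfectly good element of the space, but it differs from $\mathbf{v}$ in infinitely many slots, and the bare vector space axioms give us no way to say whether these partial sums are "getting closer" to $\mathbf{v}$. (In fact, no finite linear combination of the $\mathbf{e}_i$ equals this $\mathbf{v}$ at all, so the $\mathbf{e}_i$ don't even span the space of all sequences in the purely algebraic sense.)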

Thus, if we want to be able to use the "basis" we have given above, we must add some extra structure to the space which specifies how limits or approximations are to work, i.e. "who is within some tolerance of whom", so that the infinite sum is defined at all, and so that it generates the result we'd like it to generate.
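For a sense of how the choice of structure plays out (standard examples, added here for illustration): if we restrict to square-summable sequences and use the $\ell^2$ norm, the partial sums do converge, since

$$\left\| \mathbf{v} - \sum_{i=1}^{n} a_i \mathbf{e}_i \right\|_2 = \left( \sum_{i > n} a_i^2 \right)^{1/2} \longrightarrow 0,$$

and the $\mathbf{e}_i$ become a (Schauder) basis of $\ell^2$. With the sup norm on bounded sequences, by contrast, the partial sums of $(1, 1, 1, \cdots)$ stay at distance $1$ from it forever and never converge, while under coordinatewise convergence they do. Different structures give different verdicts on the same formal sum.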

So no, the vector space itself doesn't "need" a topology, but suitably building one on it lets us do a lot of stuff we'd "like" to be able to do but otherwise wouldn't.