[Math] Intuitive idea of the vector space of functions

linear-algebra, vector-spaces

I am studying vector space from Linear Algebra Done Right by Sheldon Axler.

Going by the notion of vectors, I started to visualize the elements of the vector space $R^n$ as $n$-dimensional vectors. I could visualize the closure property with respect to linear combinations of $n$-dimensional vectors.

But I could not understand $F^S$ properly. The book takes the example

$F = R\;$ and $S = \left [ 0,1 \right ]$ and states that $R^S$ is a vector space. I could not relate this function space to the vector form.

Please explain vector spaces in terms of functions, or, more formally, give a more generic view of vector spaces.

Best Answer

There are ways in which $F^S$ relates to $F^n$, and ways in which it doesn't.

Each element of $F^S$ is determined by taking each element of $S$ and assigning it a value from $F$. This is, by definition, an $F$-valued function on $S$.
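To make this concrete, here is a minimal sketch (my own illustration, not from Axler's book) representing elements of $R^{[0,1]}$ as Python callables, with the vector-space operations defined pointwise:

```python
# Elements of R^S, with S = [0, 1], modeled as Python callables.
# The vector-space operations on R^S are defined pointwise.

def add(f, g):
    """Vector addition in R^S: (f + g)(s) = f(s) + g(s)."""
    return lambda s: f(s) + g(s)

def scale(c, f):
    """Scalar multiplication in R^S: (c * f)(s) = c * f(s)."""
    return lambda s: c * f(s)

# Two "vectors" in R^[0,1]:
f = lambda s: s ** 2
g = lambda s: 3 * s

# Their linear combination 2f + g is again a function on [0, 1],
# i.e. again an element of R^[0,1] -- that is the closure property.
h = add(scale(2.0, f), g)
print(h(0.5))   # 2*(0.5)^2 + 3*(0.5) = 2.0
```

The point is that nothing here needs coordinates: addition and scaling of functions are defined directly, value by value.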

Each element of $F^n$ is an $n$-tuple of the form $(f_1, \ldots, f_n)$, with $f_i \in F$. This can be reinterpreted as an $F$-valued function on a set with $n$ elements. So there is something that makes $F^n$ resemble $F^S$.
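The reinterpretation of a tuple as a function can also be sketched in code (again my own illustration): indexing into the tuple *is* the function on the $n$-element set $\{0, 1, \ldots, n-1\}$, and componentwise tuple addition agrees with pointwise function addition.

```python
# An element of R^3 ...
v = (2.0, -1.0, 5.0)
w = (1.0, 1.0, 1.0)

# ... reinterpreted as a function on the 3-element set {0, 1, 2}:
v_as_func = lambda i: v[i]
w_as_func = lambda i: w[i]

# Componentwise addition of tuples ...
tuple_sum = tuple(v[i] + w[i] for i in range(3))

# ... agrees with pointwise addition of the corresponding functions:
func_sum = tuple(v_as_func(i) + w_as_func(i) for i in range(3))
assert tuple_sum == func_sum   # both are (3.0, 0.0, 6.0)
```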

What is very dangerous, however, is to go further and think of the function values of each element of $F^S$ as coordinates; in general, this is not correct. Take an element of $F^n$ of the form $v = (f_1, \ldots, f_n)$. Each $f_i$ is the coordinate of $v$ with respect to the standard basis vector $e_i$ ($1$ in the $i$-th position and $0$ everywhere else). So in the finite case, coordinates and function values coincide, and everything works out.

However, consider an element $w \in F^S$. Its values are determined by $w(s)$ for $s \in S$, so it's tempting to say that these are the coordinates of $w$ with respect to a basis of functions that take the value $1$ at a particular $s$ and $0$ elsewhere. The problem with this claim is that if $S$ is an infinite set and $w$ is a function that is everywhere nonzero, then $w$ has infinitely many nonzero "coordinates", which would mean it is expressed as an infinite sum of basis vectors. Infinite sums are not allowed in linear algebra, because they require notions of convergence conferred by calculus, notions that may or may not exist for generic vector spaces. There are ways to introduce infinite sums into linear algebra, but that is a whole different story.
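The obstruction can be seen computationally (a sketch of my own, taking $S$ to be the natural numbers): any *finite* linear combination of the indicator functions $e_s$ is nonzero at only finitely many points, so it can never equal a function that is everywhere nonzero, such as the constant function $1$.

```python
# S is the (infinite) set of natural numbers; e_s is the function
# that is 1 at s and 0 everywhere else.

def indicator(s):
    """e_s in R^N: 1 at s, 0 at every other natural number."""
    return lambda t: 1.0 if t == s else 0.0

def finite_combination(coeffs):
    """A finite linear combination sum_s coeffs[s] * e_s,
    given as a dict {s: coefficient}."""
    def combo(t):
        return sum(c * indicator(s)(t) for s, c in coeffs.items())
    return combo

w = finite_combination({0: 3.0, 5: -2.0, 7: 1.0})

# The support of any such combination is finite -- here it sits
# inside {0, 5, 7} -- so w cannot be everywhere nonzero:
support = [t for t in range(100) if w(t) != 0.0]
print(support)   # [0, 5, 7]
```

Since linear algebra only permits finite linear combinations, the indicator functions fail to span $R^{\mathbb{N}}$, which is exactly why they do not form a basis when $S$ is infinite.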

So while it is true that $F^S$ is a vector space, if $S$ is an infinite set, then its basis cannot be expressed in a simple way using the values each function takes on $S$. Indeed, any such basis requires some form of the axiom of choice to describe.

All of this can be summarized in the following statement:

Let $V$ be a vector space over $F$ with a basis $\beta$. Then $V$ is isomorphic to $F^{\beta}$ if and only if $\beta$ is a finite set, i.e., $V$ is finite-dimensional.