In linear algebra, I've always thought of a basis as something attached to a vector space: given a (real) vector space $V$, a basis is a (minimal) set of linearly independent vectors $\mathcal{B} = \{\mathbf{v}_1, \dots , \mathbf{v}_n\}$ such that $V = \operatorname{Span}(\mathcal{B})$. I was trying to understand what exactly people mean when they use the term basis while talking about linear maps. Say we fix two vector spaces $V$, $W$ with bases $\mathcal{B}_V$ and $\mathcal{B}_W$. If we now consider the collection of linear maps $\mathcal{L} = \{L:V \rightarrow W \}$, how can a basis for $\mathcal{L}$ be formally defined?
Basis for the space of linear maps.
linear algebra
Related Solutions
The proof that every vector space has a basis uses the axiom of choice; the proof is similar to that of the well-ordering theorem. Let $f$ be a choice function on the set of nonempty subsets of $V$. By transfinite recursion we define for ordinals $\alpha$ a function $e\left(\alpha\right) :=f(V\backslash\operatorname{span}\left\{ e\left(\beta\right)\mid\beta\in\alpha\right\} )$. This definition malfunctions iff the argument of $f$ is empty, i.e. we keep defining $e\left(\alpha\right)$ until the span covers all of $V$. This is guaranteed to happen eventually, since otherwise $\alpha\mapsto e(\alpha)$ would inject the proper class of ordinals into the set $V$, which is impossible.
Replacing two basis elements $e_1,\,e_2$ with $a e_1\pm b e_2$, where $a\neq 0\neq b$, is the usual way of showing a basis isn't unique. For this argument to work, we need this alternative pair to have the same span. Certainly $2a e_1,\,2b e_2$ are in that span, as the sum and difference of the two new vectors. But in fields of characteristic 2, such as $\mathbb{F}_2$ (discussed in the above comments), we can't then divide by $2a,\,2b$ to finish the proof, because $2:=1+1=0$.
Let's talk about vectors of real numbers with two elements in them. The set of these vectors is named $\mathbb{R}^2$. Consider $$\begin{bmatrix}1\\ 0\end{bmatrix} \text{and} \begin{bmatrix}0\\ 1\end{bmatrix}.$$
Can any other two-element vector be written as a combination of these two vectors? Yes, it can! Consider any vector $\begin{bmatrix}a\\ b\end{bmatrix}$ for any values $a$ and $b$. $\begin{bmatrix}a\\ b\end{bmatrix}=a\begin{bmatrix}1\\ 0\end{bmatrix} + b\begin{bmatrix}0\\ 1\end{bmatrix}.$ Since any two-element vector can be written as a linear combination of these two vectors, and we need both of these vectors, these vectors form a basis.
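As a quick sanity check, here is a minimal NumPy sketch of this decomposition (the values of $a$ and $b$ are arbitrary examples, not from the text):

```python
import numpy as np

# Standard basis vectors of R^2
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# An arbitrary vector [a, b] is recovered as a*e1 + b*e2
a, b = 3.0, -2.0
v = a * e1 + b * e2
print(v)  # [ 3. -2.]
```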
Formally, we say that $\left\{\begin{bmatrix}1\\ 0\end{bmatrix},\begin{bmatrix}0\\ 1\end{bmatrix}\right\}$ is a basis of $\mathbb{R}^2$. We could also say that $\left\{\begin{bmatrix}1\\ 0\end{bmatrix},\begin{bmatrix}0\\ 1\end{bmatrix}\right\}$ generates $\mathbb{R}^2$.
What about the vectors $$\begin{bmatrix}1\\ 0\end{bmatrix}, \begin{bmatrix}0\\ 1\end{bmatrix} \text{and} \begin{bmatrix}1\\ 1\end{bmatrix}?$$ We could also write any vector $\begin{bmatrix}a\\ b\end{bmatrix}$ as a linear combination of these three vectors. However, we don't need all three of them. We could just as easily work with just the first two vectors. So the set of these vectors is not a basis of $\mathbb{R}^2$. However, it still generates $\mathbb{R}^2$ because $\text{span}\left(\left\{\begin{bmatrix}1\\ 0\end{bmatrix},\begin{bmatrix}0\\ 1\end{bmatrix},\begin{bmatrix}1\\ 1\end{bmatrix}\right\}\right)=\mathbb{R}^2$. Therefore, the dimension of $\mathbb{R}^2$ is two, because any basis set of $\mathbb{R}^2$ has two vectors in it.
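The redundancy of the third vector can be checked numerically; a short sketch (stacking the vectors as matrix columns and computing the rank is my own illustration, not part of the text):

```python
import numpy as np

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
u  = np.array([1.0, 1.0])

# The three vectors still span R^2, but they are linearly dependent:
# u is already a combination of the first two.
assert np.allclose(e1 + e2, u)

# A matrix whose columns are the three vectors has rank 2, not 3,
# confirming the set spans R^2 but is not linearly independent.
M = np.column_stack([e1, e2, u])
print(np.linalg.matrix_rank(M))  # 2
```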
Can we have a basis other than $\left\{\begin{bmatrix}1\\ 0\end{bmatrix},\begin{bmatrix}0\\ 1\end{bmatrix}\right\}$? Yes, we can. Consider $$\begin{bmatrix}1\\ 1\end{bmatrix} \text{and} \begin{bmatrix}0\\ 1\end{bmatrix}.$$ Any vector $\begin{bmatrix}a\\ b\end{bmatrix}$ can be written as a linear combination of these two vectors: $$ a\begin{bmatrix}1\\ 1\end{bmatrix} + (b-a)\begin{bmatrix}0\\ 1\end{bmatrix}=\begin{bmatrix}a\\ b\end{bmatrix}.$$ So $\left\{\begin{bmatrix}1\\ 1\end{bmatrix},\begin{bmatrix}0\\ 1\end{bmatrix}\right\}$ is also a basis of $\mathbb{R}^2$.
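The coefficients $a$ and $(b-a)$ above can also be found by solving a small linear system; a sketch (the example values are arbitrary):

```python
import numpy as np

b1 = np.array([1.0, 1.0])
b2 = np.array([0.0, 1.0])

a, b = 5.0, 2.0
target = np.array([a, b])

# Coefficients from the text: a and (b - a)
assert np.allclose(a * b1 + (b - a) * b2, target)

# Equivalently, solve the linear system B @ c = target for c,
# where B has the basis vectors as columns.
B = np.column_stack([b1, b2])
c = np.linalg.solve(B, target)
print(c)  # [ 5. -3.]
```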
Imagine trying to send a vector space through a communication channel. If we had to send every vector then this would be an impossible thing to do because we would need an infinite number of vectors. What could we do instead? If we had a basis for our vector space, we could just send the basis and tell the person on the other end that the vectors we sent are a basis for our vector space. And that's the intuition. A basis is a small set of vectors that completely represents our vector space. In fact, there's no smaller set of vectors that completely represents our vector space.
Best Answer
The definition of basis is the same: a linearly independent set which also spans. An example of a basis, induced by the bases $\mathcal{B}_V=\{v_1,\dots,v_n\}$ and $\mathcal{B}_W=\{w_1,\dots,w_m\}$ on $V$ and $W$, is the collection $\{T_{ij}: 1\leq i\leq n, 1\leq j\leq m\}$ where $T_{ij}:V\to W$ is the unique linear map such that for all $k\in\{1,\dots, n\}$, \begin{align} T_{ij}(v_k)&= \begin{cases} w_j&\text{if $k=i$}\\ 0 & \text{else} \end{cases} \end{align} It is a standard but very useful and informative exercise to prove that this collection of maps actually forms a basis for the space $\text{Hom}(V,W)$ of all linear maps from $V$ into $W$.
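The spanning half of that exercise can be sketched concretely. Identifying $V\cong\mathbb{R}^n$ and $W\cong\mathbb{R}^m$ via the given bases, $T_{ij}$ becomes the $m\times n$ matrix with a single 1 in row $j$, column $i$. The sketch below (with 0-based indices and example dimensions of my choosing, unlike the 1-based indices in the text) checks that an arbitrary linear map is a combination of the $T_{ij}$:

```python
import numpy as np

n, m = 3, 2  # dim V = n, dim W = m (example sizes)

def T(i, j):
    """Matrix of the basis map T_ij: sends the i-th basis vector of V
    to the j-th basis vector of W, and all other basis vectors to 0."""
    M = np.zeros((m, n))
    M[j, i] = 1.0
    return M

# Any linear map L: V -> W (an m x n matrix) decomposes as
# L = sum_{i,j} L[j, i] * T(i, j), so the T_ij span Hom(V, W).
rng = np.random.default_rng(0)
L = rng.standard_normal((m, n))
recon = sum(L[j, i] * T(i, j) for i in range(n) for j in range(m))
assert np.allclose(recon, L)

# Linear independence: a combination sum_{i,j} c_ij * T(i, j) has
# (j, i) entry c_ij, so it is the zero map only when every
# coefficient c_ij is zero.
print("dim Hom(V, W) =", n * m)
```

This also recovers the dimension formula $\dim\operatorname{Hom}(V,W) = \dim V \cdot \dim W$.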