I understand the dual space to be a vector space of linear functionals, which take in vectors and spit out scalars. I'm struggling to understand what this actually means, and more specifically what it means for the dual basis.

At a high level, I think that if we have a basis vector and find its dual basis vector, then we can feed the dual basis vector a vector via the inner product, and it will spit out a scalar telling us how much of our original basis vector is in the representation of the vector with respect to the original basis. But I'm not sure how to interpret actual results when going through these operations: if I take the inner product of a dual basis vector with some vector $x$ and the result is $6$, what does that actually mean?

Additionally, I understand that the rows of a matrix can in general be thought of as linear functionals that tell us how much of something is in the output. How exactly is this related to the dual basis? It almost seems like the dual basis is functionally equivalent to the transpose of a matrix, but this doesn't seem to be the case when I look at actual examples of calculating dual bases.
Dual space as linear functional
linear algebra
Related Solutions
"Self-dual" is not commonly used as a term with a precise definition in this context. When someone says that a vector space $V$ is self-dual, that normally means (at a minimum) that there exists an isomorphism $V\to V^*$ from $V$ to its dual space. Depending on context, it may also mean that a specific such isomorphism has been chosen, or that there is a specific canonical such isomorphism which can be defined in terms of some extra structure that $V$ has.
So in the first, weakest sense, where you just say there exists an isomorphism, every finite-dimensional vector space is self-dual. Given a vector space $V$ with a bilinear form $\langle \cdot,\cdot\rangle:V\times V\to\mathbb{R}$, there is a canonical map $f:V\to V^*$ which takes $v\in V$ to the functional $w\mapsto\langle v,w\rangle$. If the bilinear form is nondegenerate, then $f$ is injective. If $V$ is additionally finite-dimensional, then $f$ is automatically surjective as well. So any finite-dimensional vector space with a nondegenerate bilinear form (e.g., an inner product) is self-dual in the stronger sense of having a canonical isomorphism to its dual determined by its extra structure.
(In fact, conversely, an isomorphism $V\to V^*$ determines a nondegenerate bilinear form by reversing the construction above, so fixing such an isomorphism on a finite-dimensional vector space is equivalent to choosing a nondegenerate bilinear form.)
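As a concrete sketch of the canonical map described above (the matrix $G$ and the names here are my own choices, not from the answer): on $\mathbb{R}^3$ with the bilinear form $\langle v,w\rangle = v^T G w$ for a symmetric positive-definite $G$, the map $f$ sends $v$ to the functional $w\mapsto v^T G w$, which is concretely the row vector $v^T G$.

```python
import numpy as np

# Hypothetical Gram matrix of a nondegenerate bilinear form on R^3
# (symmetric positive definite, so it is an inner product).
G = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

def f(v):
    """The canonical map V -> V*: send v to the functional w -> <v, w>.
    Concretely the functional is the row vector v^T G."""
    return lambda w: v @ G @ w

v = np.array([1.0, 0.0, 0.0])
w = np.array([0.0, 1.0, 0.0])
print(f(v)(w))  # <v, w> = v^T G w = 1.0
```

Nondegeneracy of the form corresponds to $G$ being invertible, which is exactly what makes $v \mapsto v^T G$ injective (and hence, in finite dimensions, an isomorphism).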
The answer to your questions depends upon your answer to this question: do you see $\mathbb C$ as a real vector space or as a complex vector space?
Since you mention $\mathbb{R}^2$, I'll assume that you see it as a real vector space. In that case:
- A basis of $\mathbb{C}^*$ is $\{\alpha,\beta\}$, with $\alpha(z)=\operatorname{Re}z$ and $\beta(z)=\operatorname{Im}z$.
- The functionals get their values in $\mathbb R$.
On the other hand, if you see $\mathbb C$ as a complex vector space, then you can take any map $\alpha\colon\mathbb{C}\longrightarrow\mathbb{C}$ of the type $\alpha(z)=az$ (as long as $a\neq0$) and $\{\alpha\}$ will be a basis of $\mathbb{C}^*$. For instance, you can take $\alpha=\operatorname{Id}$ (which corresponds to choosing $a=1$).
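A quick numerical check of the real case (my own illustration): with $\{1, i\}$ as a basis of $\mathbb{C}$ over $\mathbb{R}$, the functionals $\alpha = \operatorname{Re}$ and $\beta = \operatorname{Im}$ satisfy exactly the dual-basis conditions $\alpha(1)=1$, $\alpha(i)=0$, $\beta(1)=0$, $\beta(i)=1$.

```python
def alpha(z):
    """alpha(z) = Re z"""
    return z.real

def beta(z):
    """beta(z) = Im z"""
    return z.imag

basis = [1 + 0j, 1j]  # basis {1, i} of C as a real vector space
print([alpha(b) for b in basis])  # [1.0, 0.0]
print([beta(b) for b in basis])   # [0.0, 1.0]
```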
Best Answer
Let's say we have a basis $v_1, \dots, v_n$ of the vector space $V$ (for example, $V = \mathbb R^n$). Then the dual basis is often written $v_1^\vee, \dots, v_n^\vee$, where the functional $v_i^\vee$ is defined by $$v_i^\vee(v_j) = \delta_{ij},$$ with $\delta_{ij}$ the Kronecker delta ($\delta_{ij}=1$ if $i=j$ and $\delta_{ij}=0$ otherwise). Note that this completely determines $v_i^\vee$: since $v_1, \dots, v_n$ is a basis of $V$, prescribing the values of $v_i^\vee$ on a basis defines it on all of $V$ by linearity.
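To make this computable (the particular basis here is my own example): if $B$ is the matrix whose columns are $v_1, \dots, v_n$, then the rows of $B^{-1}$ are exactly the dual basis functionals, because $B^{-1}B = I$ says precisely that row $i$ of $B^{-1}$ applied to $v_j$ gives $\delta_{ij}$.

```python
import numpy as np

# Columns are the basis vectors v_1 = (1, 0) and v_2 = (1, 1) of R^2.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Row i of B^{-1} is the dual functional v_i^vee (as a row vector).
B_inv = np.linalg.inv(B)

# B_inv @ B is the identity: v_i^vee(v_j) = delta_ij.
print(B_inv @ B)
```

This also explains why the dual basis is not simply the transpose of the basis matrix: it is the *inverse* (and the two coincide only when the basis is orthonormal, i.e., $B$ is orthogonal).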
Now for the interpretation part: if you have $v_i^\vee(w) = 6$ for some vector $w$, that means that in the representation $$w=\sum_{j=1}^n c_jv_j$$ the coefficient $c_i$ must be $6$, because $$v_i^\vee(w) = \sum_{j=1}^n c_j v_i^\vee(v_j)=c_i$$ by linearity of $v_i^\vee$. That is the formal way of making precise "how much of $v_i$ was in the input".
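A numerical check of this interpretation (the numbers are my own): build $w = 2v_1 + 6v_2$ and verify that the dual functional $v_2^\vee$ recovers the coefficient $6$.

```python
import numpy as np

# Columns are the basis vectors v_1 = (1, 0) and v_2 = (1, 1).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B_inv = np.linalg.inv(B)       # row i is the dual functional v_i^vee

w = 2 * B[:, 0] + 6 * B[:, 1]  # w = 2 v_1 + 6 v_2, i.e. w = (8, 6)

print(B_inv[1] @ w)            # v_2^vee(w) = 6.0: the coefficient of v_2
print(B_inv[0] @ w)            # v_1^vee(w) = 2.0: the coefficient of v_1
```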
Now for the transposing part: you can interpret the calculation of $v_i^\vee(w)$ as a dot product, i.e., the multiplication of a row vector with a column vector, as follows. The linear map $v_i^\vee$ sends the $i$-th basis vector to $1$ and the other basis vectors to $0$, so if $w= \sum_{j=1}^n c_jv_j$ we have $$v_i^\vee(w) = (0,\dots, 0, 1, 0, \dots, 0) \cdot \begin{pmatrix} c_1\\ \vdots \\c_n \end{pmatrix}.$$ This gives a useful tool for evaluating any linear functional $\phi$ in the dual space $V^\vee$: the row vector is just the vector of coefficients $a_k$ of $\phi$ in the expression $$\phi = \sum_{k=1}^n a_k v_k^\vee.$$ In other words, $v_i^\vee$ corresponds to $e_i^T$, and the column vector you multiply by is the vector of coefficients of $w$ in the basis $v_1, \dots, v_n$.
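The row-times-column picture can be sketched numerically (the particular functional and coefficients are my own example): a general functional $\phi = 3v_1^\vee - v_2^\vee$ applied to $w = c_1 v_1 + c_2 v_2$ is just the row of dual-basis coefficients times the column of basis coefficients, regardless of what the basis $v_1, v_2$ actually is.

```python
import numpy as np

a = np.array([3.0, -1.0])  # coefficients of phi in the dual basis v_1^vee, v_2^vee
c = np.array([2.0, 6.0])   # coefficients of w in the basis v_1, v_2

# phi(w) = sum_k a_k * c_k, a row vector times a column vector.
print(a @ c)               # 3*2 + (-1)*6 = 0.0
```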