Your proof is incomplete, since you have not shown that every element in the span of $\{\beta_1, ..., \beta_n\}$ is also contained in $R_T$.
To elaborate:
By showing that any arbitrary $y \in R_T$
also satisfies $y \in \mathrm{span}(\beta_1, \ldots, \beta_n)$,
you have shown that $R_T \subseteq \mathrm{span}(\beta_1, \ldots, \beta_n)$.
To prove equality, we need to show the reverse inclusion as well:
given an element $z \in \mathrm{span}(\beta_1, \ldots, \beta_n)$,
we need to show that $z \in R_T$.
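For instance, assuming (as in your setup) that each $\beta_i$ lies in $R_T$, say $\beta_i = T\gamma_i$ for some $\gamma_i \in V$, the reverse inclusion is a one-line computation using linearity of $T$:

$$ z = \sum c_i \beta_i = \sum c_i T\gamma_i = T\left(\sum c_i \gamma_i\right) \in R_T. $$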
Let me try to illustrate something by rewriting a part of your argument.
Take $f\in V^*$ and $\alpha\in V$. Assume $\alpha = \sum x_i\alpha _i$, where $x_i\in F$ and $\alpha_i\in B$ $^{(1)}$. By linearity of $f$ we have
$$ f(\alpha) = f\left (\sum x_i\alpha _i \right ) = \sum x_i f(\alpha _i). $$
Let $h:V\to F^n$ be the canonical isomorphism (with respect to $B$) and $\pi _i :F^n\to F$ the projection onto the $i$-th coordinate. Put $f_i := \pi _ih$ for all $i$ and note that
$$
\begin{align*}
\left (\sum f(\alpha _i)f_i\right )(\alpha) = \sum f(\alpha _i)f_i(\alpha) &= \sum f(\alpha _i) \pi _ih(\alpha) \\
&=\sum f(\alpha _i) \pi _ih\left (\sum x_j\alpha _j\right ) \\
&=\sum f(\alpha _i)\pi _i(x_1,\ldots,x_n) \\
&=\sum x_if(\alpha _i).\quad^{(2)}
\end{align*}
$$
(1) $n$ is fixed, so $J_n$ is redundant. It is clear from context that we mean the unique linear combination with respect to the given basis $B$. It is also clear that the operation $x_i\alpha _i$ refers to the action of $F$ on $V$.
(2) Stylistically it is preferable to carry out long computations in display mode. It is easier on the eyes and also easier to follow. Notice also that there is no ambiguity regarding the multiplications involved.
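If it helps, the identity $f = \sum f(\alpha_i)f_i$ can also be sanity-checked numerically. A hypothetical example in $\mathbb{R}^3$ (the basis taken as the columns of a random invertible matrix, the functional as a random covector):

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns of A form a basis B = {a_1, a_2, a_3} of R^3.
A = rng.normal(size=(3, 3))
assert abs(np.linalg.det(A)) > 1e-8  # B really is a basis

# A linear functional f on R^3 is given by a covector c: f(v) = c @ v.
c = rng.normal(size=3)
f = lambda v: c @ v

# f_i(v) is the i-th coordinate of v with respect to B,
# i.e. the i-th entry of A^{-1} v (this is pi_i composed with h).
A_inv = np.linalg.inv(A)
f_i = lambda i, v: (A_inv @ v)[i]

# Check f(v) == sum_i f(a_i) * f_i(v) on a random test vector v.
v = rng.normal(size=3)
lhs = f(v)
rhs = sum(f(A[:, i]) * f_i(i, v) for i in range(3))
print(np.isclose(lhs, rhs))  # should print True
```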
Thus $f(\alpha) = \sum_{i=1}^n x_i\cdot_F f(\alpha_i) = \left(\sum_{i=1}^nf(\alpha_i)\cdot_Lf_i\right)(\alpha)$, $\forall \alpha \in V$.
Correct.
Hence $f= \sum_{i=1}^nf(\alpha_i)\cdot_Lf_i$. So $\mathrm{span}(B^*) = \mathrm{span}(\{f_1,\ldots,f_n\})=L(V,F)$.
Stay consistent with your style. You quantified the previous statement. Do so here as well. It holds that $f = \sum f(\alpha _i)f_i$ for every $f\in V^*$, thus the $f_i$ span $V^*$.
By Corollary 2 of Section 2.3, $\{f_1,\ldots,f_n\}$ is a basis of $L(V,F)$.
I don't see how Corollary 2 supports this statement. Corollary 2 states the following. Let $V$ be an $n$-dimensional vector space. Then
- any collection of more than $n$ vectors is linearly dependent.
- no collection of fewer than $n$ vectors spans $V$.
It is true that $V\cong V^*$ for finite-dimensional vector spaces, but how do you know a priori that $\dim V^* = n$? You only know that the $f_i$ span $V^*$. You should also check that the $f_i$ are linearly independent.
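A quick sketch of that independence check: suppose $\sum_i c_i f_i = 0$ with $c_i \in F$. Evaluating at $\alpha_j$ and using $f_i(\alpha_j) = \delta_{ij}$ gives

$$ 0 = \left(\sum_i c_i f_i\right)(\alpha_j) = \sum_i c_i f_i(\alpha_j) = \sum_i c_i \delta_{ij} = c_j, $$

so every $c_j = 0$, the $f_i$ are linearly independent, and hence $\dim V^* = n$.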
Let $\alpha\in V$. Since $B=\{\alpha_1,\ldots,\alpha_n\}$ is a basis of $V$, $\exists !(x_1,\ldots,x_n)\in F^n$ such that $\alpha =\sum_{i\in J_n}x_i\cdot_V \alpha_i$. By the definition of the map $h:V\to F^n$, we have $h(\alpha)=(x_1,\ldots,x_n)$. So $f_i(\alpha) =\pi_i(h(\alpha)) =\pi_i((x_1,\ldots,x_n))=x_i$, $\forall i\in J_n$. Thus $f_i(\alpha)=x_i$, $\forall i\in J_n$. Hence $\alpha =\sum_{i\in J_n}f_i(\alpha)\cdot_V \alpha_i$.
This establishes $f_i(\alpha _j) = \delta _{ij}$ (although you should make this computation explicit) as well as $\alpha = \sum f_i(\alpha) \alpha_i$.
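For example, the explicit computation is immediate from the definitions: $h(\alpha_j) = e_j$, the $j$-th standard basis vector of $F^n$, so

$$ f_i(\alpha_j) = \pi_i h(\alpha_j) = \pi_i(e_j) = \delta_{ij}. $$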
Then there is a unique dual basis $B^* = \{f_1,\ldots, f_n\}$ for $V^*$ such that $f_i(\alpha_j) = \delta_{ij}$.
What about this part? Again, I don't see how uniqueness (immediately) follows from 3.1 Theorem 1.
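Uniqueness does follow, but it deserves its own short argument: if $g_1,\ldots,g_n \in V^*$ also satisfy $g_i(\alpha_j)=\delta_{ij}$, then expanding each $g_i$ via the spanning formula $f = \sum f(\alpha_j)f_j$ gives

$$ g_i = \sum_j g_i(\alpha_j) f_j = \sum_j \delta_{ij} f_j = f_i. $$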
Best Answer
What is $Z$?
I'm not familiar with this use of $J_m$. Although one can guess that you mean $J_m=\{1,...,m\}$ and $J_n=\{1,...,n\}$, it might be considerate to take the trouble to define $J_n$ and $J_m$ somewhere.
Theorem 1 of Section 3.1 lets you freely prescribe the images of the basis vectors,
so you seem to have decided to choose the $\beta_i$ in the above to be $f(\alpha_1),\ldots,f(\alpha_m),0_F,\ldots,0_F$ in the vector space consisting of the scalar field, right?
This is a reasonable argument. But since $\alpha \in W$ while $g$ operates on $V$, you might want to use the mappings to $0_F$ which we saw earlier in the above.
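Concretely (a sketch, assuming $\alpha_1,\ldots,\alpha_m$ is a basis of $W$ extended to a basis $\alpha_1,\ldots,\alpha_n$ of $V$): define $g$ on the basis by

$$ g(\alpha_i) = \begin{cases} f(\alpha_i) & 1 \le i \le m,\\ 0_F & m < i \le n, \end{cases} $$

and extend linearly; then $g|_W = f$, as desired.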