You asked for pointers to inconsistencies or inaccuracies, so I'll do that. The most serious ones are in part (a).
$(T_1+T_2)A=T_1A+T_2A$ is true, but it is not the matrix distributive property which makes it true: not every linear transformation is given by matrix multiplication, and indeed matrix multiplication doesn't even make sense for $V\neq F^n$ (with $F$ being $\Bbb R$ or $\Bbb C$ or a generic field, depending on your definition).
Similarly your other property is true, but it has nothing to do with matrices.
These two properties do not show directly that $V^*$ is a vector space, but rather that it is a subspace of a larger space $W$. Certainly if it is a subspace of $W$ then it is a vector space, but you never mention which larger space $W$ you are working in.
Actually, that's not strictly true; you imply that $W=V$ in your last line, which is false. For instance, if $V=\Bbb R$ and $f$ is defined by $f(x)=x$, then $f$ is not a real number, so it is not even true that $V^*\subseteq V$.
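One natural fix, assuming your course has introduced it, is to take $W$ to be the space of all functions from $V$ to $F$ under pointwise operations,
$$W=\{\,f : V \to F\,\},\qquad (f+g)(v)=f(v)+g(v),\qquad (a\cdot f)(v)=a\,f(v);$$
your two closure properties then exhibit $V^*$ as a subspace of this $W$.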
Your proof of linear independence is great. Rough around the edges, but logically sound.
Span has some issues. First, it is not true that $0$ cannot be written as $a_1f_1+\cdots+a_nf_n$; set $a_1=\cdots=a_n=0$.
Even if you add the condition that "not all $a_i$ are zero", you still must prove that $0$ is the only linear transformation that cannot be written this way. A priori, this is far from obvious.
As a more helpful comment: the usual method for proving that $\operatorname{span} B = U$ is to say "Suppose $g\in U$ is an arbitrary vector" and then construct explicitly some $a_i$ such that $\sum a_i b_i=g$ (where $b_i\in B$ are basis elements).
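For instance, if (as I am guessing from your setup) $f_1,\dots,f_n$ is the dual basis of a basis $v_1,\dots,v_n$ of $V$, so that $f_i(v_j)=1$ when $i=j$ and $0$ otherwise, then the explicit choice that works is $a_i=g(v_i)$:
$$\Big(\sum_{i=1}^n g(v_i)\,f_i\Big)(v_j)=\sum_{i=1}^n g(v_i)\,f_i(v_j)=g(v_j)\quad\text{for each }j,$$
so $\sum_i g(v_i)f_i$ and $g$ agree on a basis of $V$ and hence, by linearity, are equal.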
The third part is perfect.
Here are answers to your questions:
- Firstly, when you say scalars $a_1, a_2, \cdots, a_n$, they are real numbers and hence can also be $0$. Keeping this in mind, suppose there is a set $S = \left\lbrace v_1, v_2, \cdots, v_n \right\rbrace \subseteq V$. Then, quite obviously, the vector $\textbf{0} \in V$ can be written as
$$0 \cdot v_1 + 0 \cdot v_2 + \cdots + 0 \cdot v_n = \textbf{0}$$
This is what we call the "trivial linear combination".
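As a concrete instance in $\mathbb{R}^2$: with $S = \left\lbrace (1, 0), (0, 1) \right\rbrace$,
$$0 \cdot (1, 0) + 0 \cdot (0, 1) = (0, 0) = \textbf{0},$$
so $\textbf{0}$ is always a linear combination of the elements of $S$, even though both scalars are $0$.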
In fact, the confusion you seem to have is the idea that, to call a vector a linear combination of other vectors, at least one of the scalars in that combination must be nonzero; the trivial combination above shows that this is not required.
- When you talk about a "set", elements cannot be repeated. So, there is no point in asking whether the elements of the set are distinct.
Lastly, I do not know what book you are following, but I feel that better versions of the definitions of linear dependence and independence are the following:
Linear Independence
A finite set $S = \left\lbrace v_1, v_2, \cdots, v_n \right\rbrace \subseteq V$ is said to be linearly independent iff
$$\alpha_1 \cdot v_1 + \alpha_2 \cdot v_2 + \cdots + \alpha_n \cdot v_n = \textbf{0}$$
implies that $\alpha_1 = \alpha_2 = \cdots = \alpha_n = 0$. This actually means that the only way you can obtain the zero vector $\textbf{0}$ from a linearly "independent" set is by setting all the scalars (coefficients) to $0$, which we call the "trivial" combination.
In the case of an infinite set $S \subseteq V$, it is said to be linearly independent iff every finite subset of $S$ is linearly independent. We already have the definition of linear independence for finite sets, which can be used here.
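For example, the set $\left\lbrace (1, 1), (1, -1) \right\rbrace \subseteq \mathbb{R}^2$ is linearly independent:
$$\alpha_1 \cdot (1, 1) + \alpha_2 \cdot (1, -1) = (\alpha_1 + \alpha_2, \alpha_1 - \alpha_2) = \textbf{0}$$
forces $\alpha_1 + \alpha_2 = 0$ and $\alpha_1 - \alpha_2 = 0$, hence $\alpha_1 = \alpha_2 = 0$.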
Linear Dependence
A finite set $S = \left\lbrace v_1, v_2, \cdots, v_n \right\rbrace \subseteq V$ is said to be linearly "dependent" iff it is not linearly independent. Thus, we need to negate the statement for linear independence. The negation is the statement
"$\exists \alpha_1, \alpha_2, \cdots, \alpha_n \in \mathbb{R}$ and $i \in \left\lbrace 1, 2, \cdots, n \right\rbrace$ such that $\alpha_1 \cdot v_1 + \alpha_2 \cdot v_2 + \cdots + \alpha_n \cdot v_n = \textbf{0}$ and $\alpha_i \neq 0$"
This statement means that the vector $v_i \in S$ can actually be written as a linear combination of the other vectors. In particular,
$$v_i = \left( - \dfrac{\alpha_1}{\alpha_i} \right) \cdot v_1 + \left( - \dfrac{\alpha_2}{\alpha_i} \right) \cdot v_2 + \cdots + \left( - \dfrac{\alpha_{i - 1}}{\alpha_i} \right) \cdot v_{i - 1} + \left( - \dfrac{\alpha_{i + 1}}{\alpha_i} \right) \cdot v_{i + 1} + \cdots + \left( - \dfrac{\alpha_n}{\alpha_i} \right) \cdot v_n$$
and therefore the vector $v_i \in S$ is "dependent" on the other vectors.
In fact, a linear combination $\alpha_1 \cdot v_1 + \alpha_2 \cdot v_2 + \cdots + \alpha_n \cdot v_n = \textbf{0}$ in which at least one $\alpha_i \neq 0$ is called a "non-trivial" linear combination.
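For example, the set $\left\lbrace (1, 2), (2, 4) \right\rbrace \subseteq \mathbb{R}^2$ is linearly dependent, because
$$2 \cdot (1, 2) + (-1) \cdot (2, 4) = \textbf{0}$$
is a non-trivial combination; and indeed $(2, 4) = 2 \cdot (1, 2)$, so one vector depends on the other.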
For an infinite set $S \subseteq V$, it is said to be linearly dependent iff it is not linearly independent. Again, we need to negate the statement for linear independence of an infinite set. The negation of that statement would be
"There exists a finite set $A \subset S$ such that $A$ is not linearly independent". And now, we do have the definition of linear dependence (not linear independence) for finite sets which can be used.
I hope your confusion about distinct elements will be cleared by this. And if you are still confused, try forming sets which are linearly dependent and independent in $\mathbb{R}^2$ and $\mathbb{R}^3$, which you can easily visualize. Also read some material on the span of a set and how we can connect linear combinations and span with linear dependence and independence.
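If you want a quick way to experiment numerically (just a sketch using NumPy, not something from your book): a finite list of vectors in $\mathbb{R}^n$ is linearly independent exactly when the matrix having them as rows has rank equal to the number of vectors.

```python
import numpy as np

def is_linearly_independent(vectors):
    """Return True if the given same-length real vectors are linearly independent."""
    # Stack the vectors as rows; they are independent exactly when
    # the rank of this matrix equals the number of vectors.
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

print(is_linearly_independent([[1, 1], [1, -1]]))  # True: independent pair in R^2
print(is_linearly_independent([[1, 2], [2, 4]]))   # False: (2, 4) = 2 * (1, 2)
```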
Best Answer
Use that transposition is linear, $(A+B)^t=A^t+B^t$ and $(cA)^t=cA^t$, together with $(A^t)^t=A$: if $\sum_{j=1}^ka_j A_j^t=0$ for some scalars $(a_j)_{j=1}^k$, then taking the transpose gives $\sum_{j=1}^ka_j (A_j^t)^t=0$. The LHS is $\sum_{j=1}^ka_j A_j$, and we conclude by the assumption on the family $(A_j)_{j=1}^k$.
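As a small illustration of the argument with $k=2$ (my own example, not taken from the question): take $A_1=\begin{pmatrix}1&0\\0&0\end{pmatrix}$ and $A_2=\begin{pmatrix}0&1\\0&0\end{pmatrix}$, which are linearly independent. If $a_1A_1^t+a_2A_2^t=0$, transposing gives
$$a_1A_1+a_2A_2=\begin{pmatrix}a_1&a_2\\0&0\end{pmatrix}=0,$$
so $a_1=a_2=0$, i.e. $A_1^t, A_2^t$ are linearly independent as well.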