Prove that the set of linear combinations of n linearly independent vectors can only have n linearly independent vectors.

linear-algebra, solution-verification, vector-spaces

I'm stuck on the following problem:


Let $V$ be a vector space and $\mathbf{a_1, a_2, \dots, a_n} \in V$, a set of vectors of which $r$ are linearly independent, where $r \le n$.
Define $\mathbf{b_1, \dots, b_m}$ as below:
$$\begin{aligned}
\mathbf{b_1} &= a_{11} \mathbf{a_1} + \cdots + a_{1n}\mathbf{a_n} \\
\mathbf{b_2} &= a_{21} \mathbf{a_1} + \cdots + a_{2n}\mathbf{a_n} \\
&\vdots \\
\mathbf{b_m} &= a_{m1} \mathbf{a_1} + \cdots + a_{mn}\mathbf{a_n} \\
\end{aligned}$$

Prove that the number of linearly independent vectors in $\mathbf{b_1, \dots, b_m}$ is at most $r$.


I understand that $\mathbf{b_1, \dots, b_m} \in \operatorname{Span}(\mathbf{a_1, \dots, a_n})$ and therefore lie in a subspace of dimension $r$. Hence, there can be at most $r$ linearly independent vectors in $\{\mathbf{b_1, \dots, b_m}\}$. However, I'm trying not to rely on that theorem and am taking the following approach instead, but I'm stuck. Here's what I have so far:
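(Not a proof, but the dimension-counting claim is easy to sanity-check numerically. In this NumPy sketch — all names and the random setup are my own — the rows of `B` are linear combinations of the rows of `A`, of which only $r$ are independent, and the rank of `B` never exceeds $r$:)

```python
import numpy as np

rng = np.random.default_rng(0)
r, n, m, dim = 3, 5, 7, 4

# a_1..a_r: random rows, generically linearly independent
A_ind = rng.standard_normal((r, dim))
# a_{r+1}..a_n: forced into the span of a_1..a_r
A = np.vstack([A_ind, rng.standard_normal((n - r, r)) @ A_ind])

# b_1..b_m: arbitrary linear combinations of a_1..a_n
B = rng.standard_normal((m, n)) @ A

print(np.linalg.matrix_rank(A))  # r = 3
print(np.linalg.matrix_rank(B))  # never exceeds 3
```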

Proof.

WLOG, let $\mathbf{a_1, \dots, a_r}$ be linearly independent. Then,
$$\begin{aligned}
\mathbf{a_{r+1}} &= \sum_{i=1}^r \alpha_{r+1, i} \mathbf{a_i} \\
\mathbf{a_{r+2}} &= \sum_{i=1}^r \alpha_{r+2, i} \mathbf{a_i}\\
&\vdots \\
\mathbf{a_n} &= \sum_{i=1}^r \alpha_{n, i}\mathbf{a_i}\\
\end{aligned}$$

from linear dependence.
Then,
$$\begin{aligned}
\mathbf{b_{1}} &= a_{11} \mathbf{a_1} + \cdots + a_{1r}\mathbf{a_r} + a_{1r+1}\sum_{i=1}^r \alpha_{r+1, i} \mathbf{a_i} + \cdots + a_{1n}\sum_{i=1}^r \alpha_{n, i}\mathbf{a_i}:= \lambda_{11}\mathbf{a_1} + \cdots + \lambda_{1r}\mathbf{a_r}\\
&\vdots \\
\mathbf{b_{m}} &= a_{m1} \mathbf{a_1} + \cdots + a_{mr}\mathbf{a_r} + a_{mr+1}\sum_{i=1}^r \alpha_{r+1, i} \mathbf{a_i} + \cdots + a_{mn}\sum_{i=1}^r \alpha_{n, i}\mathbf{a_i}:= \lambda_{m1}\mathbf{a_1} + \cdots + \lambda_{mr}\mathbf{a_r}\\
\end{aligned}$$
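(As a numerical sanity check on this substitution step — a sketch with made-up array names, not part of the proof — collapsing $\mathbf{b_1}$'s coefficients onto $\mathbf{a_1, \dots, a_r}$ via the $\alpha$'s reproduces the same vector, i.e. $\lambda_{1j} = a_{1j} + \sum_{k} a_{1,r+k}\,\alpha_{r+k,\,j}$:)

```python
import numpy as np

rng = np.random.default_rng(2)
r, n, dim = 2, 4, 3

A_ind = rng.standard_normal((r, dim))    # a_1..a_r, independent
alpha = rng.standard_normal((n - r, r))  # a_{r+k} = sum_j alpha[k, j] a_j
A = np.vstack([A_ind, alpha @ A_ind])    # all of a_1..a_n

coef = rng.standard_normal(n)            # the coefficients a_{11}..a_{1n} of b_1
b1 = coef @ A

# lambda_{1j} = a_{1j} + sum_k a_{1,r+k} * alpha[k, j]
lam = coef[:r] + coef[r:] @ alpha
print(np.allclose(b1, lam @ A_ind))  # True
```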

When $m \le r$, there can be at most $r$ linearly independent vectors in $\{\mathbf{b_1, \dots, b_m}\}$, since there are at most $r$ vectors to begin with.
When $m > r$, the bound $r$ can actually be attained: choose the coefficients so that $\begin{cases}\lambda_{ij} = 1 \text{ if } i = j\\ \lambda_{ij} = 0 \text{ if } i \ne j \end{cases}$ for $1 \le i, j \le r$. Then $\mathbf{b_1} = \mathbf{a_1}, \mathbf{b_2} = \mathbf{a_2}, \dots, \mathbf{b_r} = \mathbf{a_r}$, and it follows that $\mathbf{b_1, \dots, b_r}$ are linearly independent. Thus, $\{\mathbf{b_1, \dots, b_m}\}$ can have as many as $r$ linearly independent vectors.

From here, assuming that $\mathbf{b_1, \dots, b_r}$ are linearly independent, I want to show that if we introduce $\mathbf{b_{r+1}}$, we can find a nontrivial solution to
$\beta_1 \mathbf{b_1} + \cdots + \beta_r \mathbf{b_r} + \beta_{r+1} \mathbf{b_{r+1}} = \mathbf{0}$ by using $\mathbf{b_{r+1}} = \lambda_{r+1,1}\mathbf{a_1} + \cdots + \lambda_{r+1,r}\mathbf{a_r}$, but I can't think of a clever way to show it.
If anyone could help at all, I would really appreciate it! It would be best if you could give me help to expand my solution, but I would also appreciate pointers in other directions.

Best Answer

I think I found a solution.
Picking up from where I proved that each $\mathbf{b_i}$ $(i = 1, \dots, m)$ is a linear combination of $\mathbf{a_i}$ $(i = 1, \dots, r)$:


Thus, we know that $\{\mathbf{b_1, \dots, b_m}\}$ are each a linear combination of $\{\mathbf{a_1, \dots, a_r}\}$.

Now, assume for contradiction that $\{\mathbf{b_1, \dots, b_{r+1}}\}$ are linearly independent. Since we have shown that $\mathbf{b_1}$ is a linear combination of $\{\mathbf{a_1, \dots, a_r}\}$, we can write $$\mathbf{b_1} = \sum_{i=1}^r \lambda_i\mathbf{a_i}$$ Because $\mathbf{b_1}$ belongs to the linearly independent set $\{\mathbf{b_1, \dots, b_{r+1}}\}$, we have $\mathbf{b_1} \ne \mathbf{0}$, so at least one $\lambda_i$ $(1 \le i \le r)$ is nonzero. WLOG, suppose $\lambda_1 \ne 0$. Then $$\mathbf{a_1} = \frac{1}{\lambda_1}\left(\mathbf{b_1} - \sum_{i=2}^r \lambda_i \mathbf{a_i}\right)$$ which is to say, $\mathbf{a_1}$ is a linear combination of $\{\mathbf{b_1}, \mathbf{a_2}, \dots, \mathbf{a_r}\}$.

Because $\mathbf{b_2}$ is a linear combination of $\{\mathbf{a_1, \dots, a_r}\}$, and $\mathbf{a_1}$ is a linear combination of $\{\mathbf{b_1}, \mathbf{a_2}, \dots, \mathbf{a_r}\}$, it follows that $\mathbf{b_2}$ is a linear combination of $\{\mathbf{b_1, a_2, \dots, a_r}\}$. Thus, we can write $$\mathbf{b_2} = \delta_1\mathbf{b_1} + \sum_{i = 2}^r \delta_i \mathbf{a_i}$$ Because $\mathbf{b_2}$ belongs to a linearly independent set, $\mathbf{b_2} \ne \mathbf{0}$ and $\mathbf{b_2}$ is not a scalar multiple of $\mathbf{b_1}$ (otherwise $\{\mathbf{b_1, b_2}\}$ would be linearly dependent). It follows that at least one $\delta_i$ $(2 \le i \le r)$ is nonzero. WLOG, suppose $\delta_2 \ne 0$. Then $$\mathbf{a_2} = \frac{1}{\delta_2} \left(\mathbf{b_2} - \delta_1\mathbf{b_1} - \sum_{i=3}^r \delta_i \mathbf{a_i}\right)$$ and thus $\mathbf{a_2}$ is a linear combination of $\{\mathbf{b_1, b_2, a_3, \dots, a_r}\}$.
Assume this holds up to some $k < r$; that is, after relabeling, each of $\mathbf{a_1, \dots, a_k}$ is a linear combination of $\{\mathbf{b_1, \dots, b_k, a_{k+1}, \dots, a_r}\}$, e.g. $\mathbf{a_k} = \sum_{i=1}^k \theta_i\mathbf{b_i} + \sum_{i=k+1}^r \theta_i \mathbf{a_{i}}$. Substituting these expressions into the expansion of $\mathbf{b_{k+1}}$ in the $\mathbf{a_i}$ gives $$\mathbf{b_{k+1}} = \sum_{i=1}^k \gamma_i\mathbf{b_i} + \sum_{i=k+1}^r\gamma_i \mathbf{a_i}$$ Because $\mathbf{b_{k+1}}$ is linearly independent from $\mathbf{b_1, \dots, b_k}$, not all of $\gamma_{k+1}, \dots, \gamma_r$ can be zero; otherwise $\mathbf{b_{k+1}}$ would be a linear combination of $\mathbf{b_1, \dots, b_k}$. WLOG, assume $\gamma_{k+1} \ne 0$. Then $$\mathbf{a_{k+1}} = \frac{1}{\gamma_{k+1}}\left(\mathbf{b_{k+1}} - \sum_{i=1}^k \gamma_i \mathbf{b_i} - \sum_{i=k+2}^r \gamma_i\mathbf{a_i}\right)$$ Thus we have shown by induction that every $\mathbf{a_i}$ $(i=1, \dots, r)$ can be represented as a linear combination of $\mathbf{b_i}$ $(i=1, \dots, r)$.
Finally, we assumed that $\mathbf{b_{r+1}}$ is linearly independent from $\mathbf{b_i}$ $(i=1, \dots, r)$. We showed earlier that $\mathbf{b_{r+1}}$ is a linear combination of $\mathbf{a_i}$ $(i=1, \dots, r)$. Since each $\mathbf{a_i}$ $(i=1, \dots, r)$ can in turn be represented as a linear combination of $\mathbf{b_i}$ $(i=1, \dots, r)$, it follows that $\mathbf{b_{r+1}}$ is itself a linear combination of $\mathbf{b_i}$ $(i=1, \dots, r)$. This contradicts the assumption that $\{\mathbf{b_1, \dots, b_{r+1}}\}$ are linearly independent, so the number of linearly independent vectors in $\{\mathbf{b_1, b_2, \dots, b_m}\}$ is at most $r$. $\Box$
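(The exchange argument can also be checked numerically — a NumPy sketch under my own naming, not the proof itself. When $\mathbf{b_1, \dots, b_r}$ are linearly independent, the coefficient matrix $\Lambda$ with $\mathbf{b_i} = \sum_j \lambda_{ij}\mathbf{a_j}$ is invertible, so each $\mathbf{a_i}$ is recovered as a linear combination of the $\mathbf{b_i}$, and appending any further $\mathbf{b_{r+1}}$ cannot push the rank past $r$:)

```python
import numpy as np

rng = np.random.default_rng(1)
r, dim = 3, 4

A = rng.standard_normal((r, dim))  # rows: a_1..a_r, generically independent
L = rng.standard_normal((r, r))    # lambda coefficients: b_i = sum_j L[i, j] a_j
B = L @ A                          # rows: b_1..b_r

# b_1..b_r independent  <=>  L invertible; solving L x = B recovers the a_i
A_rec = np.linalg.solve(L, B)
print(np.allclose(A_rec, A))  # True

# any further b_{r+1} in span(a_1..a_r) leaves the rank at r
b_extra = rng.standard_normal(r) @ A
print(np.linalg.matrix_rank(np.vstack([B, b_extra])))  # still 3
```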
