$\textbf{Part a)}$ Two vectors are linearly dependent if and only if one is a scalar multiple of the other.
So in a) the matrix has the form
$$A = \begin{pmatrix}
a & ca \\
b & cb
\end{pmatrix}$$
There are two cases. If $a = b = 0$, then $A$ is the zero matrix, which is its own reduced row echelon form.
If $a\neq 0$ or $b\neq 0$, then the reduced row echelon form of the matrix is:
$$\begin{pmatrix}
1 & c \\
0 & 0 \\
\end{pmatrix}.$$
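For instance (a concrete check, with hypothetical values $a = 2$, $b = 4$, $c = 3$):
$$\begin{pmatrix} 2 & 6 \\ 4 & 12 \end{pmatrix} \xrightarrow{R_2 \to R_2 - 2R_1} \begin{pmatrix} 2 & 6 \\ 0 & 0 \end{pmatrix} \xrightarrow{R_1 \to \frac{1}{2}R_1} \begin{pmatrix} 1 & 3 \\ 0 & 0 \end{pmatrix},$$
which is $\begin{pmatrix} 1 & c \\ 0 & 0 \end{pmatrix}$ with $c = 3$, as claimed.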
$\textbf{Part b)}$ The given conditions imply that $a_1,a_2,a_3$ are linearly independent. Therefore the rank of the matrix is $3$ and its reduced row echelon form is:
$$ \begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0
\end{pmatrix}.$$
I think I found a solution.
Picking up from where I proved that each $\mathbf{b_i}\ (i = 1, \dots, m)$ is a linear combination of $\mathbf{a_1}, \dots, \mathbf{a_r}$:
Now suppose, for contradiction, that $\{\mathbf{b_1, \dots, b_{r+1}}\}$ are linearly independent. Since $\mathbf{b_1}$ is a linear combination of $\{\mathbf{a_1, \dots, a_r}\}$, we can write
$$\mathbf{b_1} = \sum_{i=1}^r \lambda_i\mathbf{a_i}$$
Because $\mathbf{b_1}$ belongs to the linearly independent set $\{\mathbf{b_1, \dots, b_{r+1}}\}$, we have $\mathbf{b_1} \ne 0$, so at least one $\lambda_i\ (1 \le i \le r)$ is nonzero. WLOG, suppose $\lambda_1 \ne 0$. Then,
$$\mathbf{a_1} = \frac{1}{\lambda_1}\left(\mathbf{b_1} - \sum_{i=2}^r \lambda_i \mathbf{a_i}\right)$$
That is, $\mathbf{a_1}$ is a linear combination of $\{\mathbf{b_1}, \mathbf{a_2}, \dots, \mathbf{a_r}\}$. Because $\mathbf{b_2}$ is a linear combination of $\{\mathbf{a_1, \dots, a_r}\}$, and $\mathbf{a_1}$ is a linear combination of $\{\mathbf{b_1}, \mathbf{a_2}, \dots, \mathbf{a_r}\}$, it follows that $\mathbf{b_2}$ is a linear combination of $\{\mathbf{b_1, a_2, \dots, a_r}\}$. Thus, we can write
$$\begin{aligned}
\mathbf{b_2} &= \delta_1\mathbf{b_1} + \sum_{i = 2} ^r \delta_i \mathbf{a_i}
\end{aligned}$$
Because we are assuming $\mathbf{b_2}$ is among a set of linearly independent vectors, $\mathbf{b_2} \ne 0$ and $\mathbf{b_2}$ is not a scalar multiple of $\mathbf{b_1}$. It follows that there must exist at least one $\delta_i\ (2 \le i \le r)$ such that $\delta_i \ne 0$. WLOG, suppose $\delta_2 \ne 0$. Then,
$$\mathbf{a_2} = \frac{1}{\delta_2} \left(\mathbf{b_2} - \delta_1\mathbf{b_1} - \sum_{i=3} ^ r \delta_i \mathbf{a_i}\right)$$
And thus $\mathbf{a_2}$ is a linear combination of $\{\mathbf{b_1, b_2, a_3, \dots, a_r}\}$.
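As a concrete illustration of these two exchange steps (with hypothetical vectors, taking $r = 2$): let $\mathbf{a_1} = (1,0)$, $\mathbf{a_2} = (0,1)$, $\mathbf{b_1} = (1,1)$, $\mathbf{b_2} = (1,-1)$. From $\mathbf{b_1} = \mathbf{a_1} + \mathbf{a_2}$ we get $\mathbf{a_1} = \mathbf{b_1} - \mathbf{a_2}$; substituting into $\mathbf{b_2} = \mathbf{a_1} - \mathbf{a_2}$ gives $\mathbf{b_2} = \mathbf{b_1} - 2\mathbf{a_2}$, so $\delta_2 = -2 \ne 0$ and $\mathbf{a_2} = \frac{1}{2}(\mathbf{b_1} - \mathbf{b_2})$.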
Assume this holds up to some $k < r$; that is, each of $\mathbf{a_1}, \dots, \mathbf{a_k}$ is a linear combination of $\{\mathbf{b_1, \dots, b_k, a_{k+1}, \dots, a_r}\}$, e.g. $\mathbf{a_k} = \sum_{i=1}^k \theta_i\mathbf{b_i} + \sum_{i=k+1}^r \theta_i \mathbf{a_i}$. Substituting these expressions into the expansion of $\mathbf{b_{k+1}}$ in terms of $\{\mathbf{a_1, \dots, a_r}\}$, we can write
$$\mathbf{b_{k+1}} = \sum_{i=1}^k \gamma_i\mathbf{b_i} + \sum_{i=k+1}^r\gamma_i \mathbf{a_i}$$
Because $\mathbf{b_{k+1}}$ is linearly independent of $\mathbf{b_1}, \dots, \mathbf{b_k}$, there must exist some $\gamma_i\ (k+1 \le i \le r)$ such that $\gamma_i \ne 0$. WLOG, assume $\gamma_{k+1} \ne 0$. Then
$$\mathbf{a_{k+1}} = \frac{1}{\gamma_{k+1}}\left(\mathbf{b_{k+1}} - \sum_{i=1}^k \gamma_i \mathbf{b_i} - \sum_{i=k+2}^r \gamma_i\mathbf{a_i}\right)$$
Thus we have shown by induction that all $\mathbf{a_i}(i=1, \dots, r)$ can be represented as linear combinations of $\mathbf{b_i}(i=1, \dots, r)$.
Finally, we assumed that $\mathbf{b_{r+1}}$ is linearly independent of $\mathbf{b_i}\ (i=1, \dots, r)$. We showed earlier that $\mathbf{b_{r+1}}$ is a linear combination of $\mathbf{a_i}\ (i=1, \dots, r)$. However, since each $\mathbf{a_i}\ (i=1, \dots, r)$ can be represented as a linear combination of $\mathbf{b_i}\ (i=1, \dots, r)$, it follows that $\mathbf{b_{r+1}}$ is a linear combination of $\mathbf{b_i}\ (i=1, \dots, r)$. This contradicts the assumption that $\{\mathbf{b_1, \dots, b_{r+1}}\}$ are linearly independent, so the number of linearly independent vectors in $\{\mathbf{b_1, b_2, \dots, b_m}\}$ is at most $r$. $\Box$
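As a quick numerical sanity check of the lemma (a Python/numpy sketch, not part of the proof; all dimensions and names below are my own choices):

```python
import numpy as np

# If every b_j lies in the span of a_1, ..., a_r, then at most r of the
# b_j can be linearly independent, i.e. rank(B) <= r.
rng = np.random.default_rng(1)
n, r, m = 6, 3, 5                     # dimensions chosen arbitrarily
A = rng.standard_normal((n, r))       # columns play the role of a_1, ..., a_r
C = rng.standard_normal((r, m))       # random combination coefficients
B = A @ C                             # column j of B is a combination of the a_i
assert np.linalg.matrix_rank(B) <= r  # holds for every such B, as the lemma predicts
```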
Best Answer
Given such $A_i$, we can define a "multiplication" $\times$ on $\Bbb R^n$ by setting $v\times w:=(u_1,\ldots, u_n)^T$ where $u_i=v^TA_iw$. This multiplication is bilinear, as you can easily check: $$(v+v')\times w = v\times w+v'\times w$$ $$v\times (w+w')=v\times w+v\times w'$$ $$(\alpha v)\times w=v\times(\alpha w)=\alpha(v\times w)$$ Careful: In general $\times$ is not commutative, i.e., $$v\times w\stackrel ?=w\times v$$ is not guaranteed to hold for all $v,w$.
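Each identity is immediate, coordinate by coordinate, from distributivity of matrix multiplication; for instance, $$(v+v')^TA_iw = v^TA_iw + v'^TA_iw = (v\times w)_i + (v'\times w)_i.$$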
Super-Careful: In general $\times$ is not even associative, i.e., $$u\times(v\times w)\stackrel ?=(u\times v)\times w$$ is not guaranteed to hold for all $u,v,w$.
However, our multiplication can be inverted by a "division", that is: For $v\ne 0$, every equation of the form $$ x\times v=w$$ has a unique solution $x$ (because the $A_iv$ form a basis), and also every equation $$ v\times x=w$$ has a unique solution $x$ (to see this, you may want to show that the matrix $n$-tuple $(A_1^{-1},\ldots, A_n^{-1})$ also has the desired property).
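To spell out the first claim: the components of $x\times v$ are $u_i = x^TA_iv = (A_iv)^Tx$, so $x\times v = M_v\,x$, where $M_v$ is the $n\times n$ matrix whose rows are $(A_1v)^T,\ldots,(A_nv)^T$. For $v\ne 0$ the $A_iv$ form a basis, hence $M_v$ is invertible and $x = M_v^{-1}w$ is the unique solution of $x\times v=w$.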
At any rate, such a structure is called a division algebra (over the reals). You may start reading here to learn more about these. In particular, such a beast has been proven to exist only for $n=1$, $2$, $4$, or $8$, and can be constructed from the reals $\Bbb R$ themselves, from the complex numbers $\Bbb C$, from the quaternions $\Bbb H$, and from the octonions $\Bbb O$, respectively. You of course know the first two well, and may perhaps have heard about quaternions playing a role in 3d computer graphics, for example. But I guess this theorem about division algebras is the only time you'll ever hear about octonions. :)
Specifically, for $n=2$, we identify $\Bbb R^2$ with $\Bbb C$ in the obvious way and define $A_1$, $A_2$ as multiplication with $1$ and $i$, respectively: $$A_1=\begin{pmatrix}1&0\\0&1\end{pmatrix}, \quad A_2=\begin{pmatrix}0&-1\\1&0\end{pmatrix}.$$ You may want to try to do the same for $\Bbb H$.
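For a quick numerical sanity check of the basis property and the "division" described above (a Python/numpy sketch; the helper name `mult` is my own, not from the answer):

```python
import numpy as np

# The two matrices from the n = 2 example: multiplication by 1 and by i
# on R^2 identified with C.
A1 = np.array([[1.0, 0.0], [0.0, 1.0]])
A2 = np.array([[0.0, -1.0], [1.0, 0.0]])

def mult(v, w):
    """The product v x w, with components u_i = v^T A_i w."""
    return np.array([v @ A1 @ w, v @ A2 @ w])

rng = np.random.default_rng(0)
for _ in range(1000):
    v = rng.standard_normal(2)
    # A1 v and A2 v form a basis for every v != 0 (here det = |v|^2) ...
    M = np.column_stack([A1 @ v, A2 @ v])
    assert abs(np.linalg.det(M)) > 1e-9
    # ... so "x times v = w" has the unique solution x = inverse(M^T) @ w.
    w = rng.standard_normal(2)
    x = np.linalg.solve(M.T, w)
    assert np.allclose(mult(x, v), w)
```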
Remark: Note that if you take $\Bbb C$ instead of $\Bbb R$ as the base field, the scenario is much simpler: your eigenvector-based proof that odd dimensions $\ge 3$ do not work now works for all $n\ge 2$, because complex matrices always have eigenvalues.