Basically, your explanation is hard to follow because it mixes column and row vectors.

You should stick to a single convention: whenever you consider a vector known through its components with respect to a basis, write it as a *column* vector (this is the most common convention), with notation $V$, and reserve the notation $V^T$ for the corresponding *row* vector.

With this convention, if we consider $B = (b_1, \ldots, b_n)$ and $D = (d_1, \ldots, d_n)$ (where the $b_k$ and the $d_k$ are column vectors), then $B$ and $D$ are $n \times n$ matrices, and we have the equivalence:

$$(\forall k, \ Sb_k=d_k) \ \iff \ SB=D \ \ (1)$$

From formula (1), the matrix you are looking for (as you more or less say) is simply

$$S=DB^{-1} \ \ (2)$$

For example, if you want the matrix $S$ sending $b_1=\binom{1}{1}$ and $b_2=\binom{-1}{1}$ to $d_1=\binom{0}{1}$ and $d_2=\binom{-1}{0}$ respectively, formula (2) gives:

$$S=\begin{pmatrix}
0 & -1\\
1 & 0
\end{pmatrix}
\begin{pmatrix}
1 & -1\\
1 & 1
\end{pmatrix}^{-1}=\frac{1}{2}\begin{pmatrix}
1 & -1\\
1 & 1
\end{pmatrix}$$
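As a quick numerical sanity check, formula (2) can be verified with NumPy, using the matrices from the example above:

```python
import numpy as np

# Columns of B are b_1, b_2; columns of D are d_1, d_2 (example above)
B = np.array([[1.0, -1.0],
              [1.0,  1.0]])
D = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Formula (2): S = D B^{-1}
S = D @ np.linalg.inv(B)
print(S)  # 0.5 * [[1, -1], [1, 1]]

# Check that S really sends each b_k to d_k, i.e. S B = D
assert np.allclose(S @ B, D)
```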

I hate the change of basis formula. I think it confuses way too many people, and obscures the simple intuition going on behind the scenes.

Recall the definition of the matrix of a linear map $T : V \to W$. If $B_1 = (v_1, \ldots, v_n)$ is a basis for $V$ and $B_2$ is a basis for $W$ (also ordered and finite), then we define
$$[T]_{B_2 \leftarrow B_1} = \left([Tv_1]_{B_2} \mid [Tv_2]_{B_2} \mid \ldots \mid [Tv_n]_{B_2} \right),$$
where $[w]_{B_2}$ denotes the coordinate column vector of $w \in W$ with respect to the basis $B_2$. Essentially, it's the matrix you get by applying $T$ to the basis $B_1$, writing the resulting vectors in terms of $B_2$, and placing the resulting coordinate vectors as columns.

Such a matrix has the following lovely property (and is completely defined by this property):

$$[T]_{B_2 \leftarrow B_1} [v]_{B_1} = [Tv]_{B_2}.$$

This is what makes the matrix useful. When we compute with finite-dimensional vector spaces, we tend to store vectors in terms of their coordinate vector with respect to a basis. So, this matrix allows us to directly apply $T$ to such a coordinate vector to return a coordinate vector in terms of the basis on the codomain.
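To make this concrete, here is a small NumPy sketch of the defining property. The operator and bases are hypothetical (not from the question): $T(x, y) = (y, x)$ on $\mathbb{R}^2$, $B_1$ the standard basis, and $B_2 = ((1,1), (1,-1))$. Coordinates with respect to a basis are obtained by multiplying by the inverse of the matrix whose columns are the basis vectors:

```python
import numpy as np

# Hypothetical example: T(x, y) = (y, x) on R^2, given by its standard-basis matrix
T_std = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

P1 = np.eye(2)                # columns of P1 are the basis B1 (standard basis)
P2 = np.array([[1.0,  1.0],   # columns of P2 are the basis B2 = ((1,1), (1,-1))
               [1.0, -1.0]])

# [T]_{B2 <- B1}: apply T to each vector of B1, express the result in B2 coordinates
T_B2_B1 = np.linalg.inv(P2) @ T_std @ P1

# Defining property: [T]_{B2<-B1} [v]_{B1} = [Tv]_{B2}
v_B1 = np.array([2.0, 5.0])                # coordinates of v w.r.t. B1
lhs = T_B2_B1 @ v_B1
rhs = np.linalg.inv(P2) @ (T_std @ v_B1)   # [Tv]_{B2}
assert np.allclose(lhs, rhs)
```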

This also means that, if we also have $S : W \to U$, and $U$ has a (finite, ordered) basis $B_3$, then we have

$$[S]_{B_3 \leftarrow B_2}[T]_{B_2 \leftarrow B_1}[v]_{B_1} = [S]_{B_3 \leftarrow B_2}[Tv]_{B_2} = [STv]_{B_3},$$

and so

$$[ST]_{B_3 \leftarrow B_1} = [S]_{B_3 \leftarrow B_2}[T]_{B_2 \leftarrow B_1}.$$

Note also that, if $\mathrm{id} : V \to V$ is the identity operator, then

$$[\mathrm{id}]_{B_1 \leftarrow B_1}[v]_{B_1} = [v]_{B_1},$$

which implies $[\mathrm{id}]_{B_1 \leftarrow B_1}$ is the $n \times n$ identity matrix $I_n$. Moreover, if $T$ is invertible, then $\operatorname{dim} W = n$ as well, and

$$I_n = [\mathrm{id}]_{B_1 \leftarrow B_1} = [T^{-1}T]_{B_1 \leftarrow B_1} = [T^{-1}]_{B_1 \leftarrow B_2}[T]_{B_2 \leftarrow B_1}.$$

Similarly,

$$I_n = [\mathrm{id}]_{B_2 \leftarrow B_2} = [TT^{-1}]_{B_2 \leftarrow B_2} = [T]_{B_2 \leftarrow B_1}[T^{-1}]_{B_1 \leftarrow B_2}.$$

What this means is

$$[T]_{B_2 \leftarrow B_1}^{-1} = [T^{-1}]_{B_1 \leftarrow B_2}.$$
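Continuing the same kind of numerical sketch (again with a hypothetical invertible operator $T(x,y) = (y,x)$ and hypothetical bases), this inverse relation can be checked directly:

```python
import numpy as np

# Hypothetical invertible map T(x, y) = (y, x) and bases B1 (standard), B2
T_std = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
P1 = np.eye(2)                # columns of P1 are B1
P2 = np.array([[1.0,  1.0],   # columns of P2 are B2
               [1.0, -1.0]])

T_B2_B1 = np.linalg.inv(P2) @ T_std @ P1                    # [T]_{B2 <- B1}
Tinv_B1_B2 = np.linalg.inv(P1) @ np.linalg.inv(T_std) @ P2  # [T^{-1}]_{B1 <- B2}

# [T]^{-1}_{B2<-B1} = [T^{-1}]_{B1<-B2}
assert np.allclose(np.linalg.inv(T_B2_B1), Tinv_B1_B2)
```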

From this, we can derive the change of basis formula. If we have a linear operator $T : V \to V$ and two bases $B_1$ and $B_2$ on $V$, then

\begin{align*}
[T]_{B_2 \leftarrow B_2} &= [\mathrm{id} \circ T \circ \mathrm{id}]_{B_2 \leftarrow B_2} \\
&= [\mathrm{id}]_{B_2 \leftarrow B_1} [T]_{B_1 \leftarrow B_1} [\mathrm{id}]_{B_1 \leftarrow B_2} \\
&= [\mathrm{id}^{-1}]_{B_2 \leftarrow B_1} [T]_{B_1 \leftarrow B_1} [\mathrm{id}]_{B_1 \leftarrow B_2} \\
&= [\mathrm{id}]^{-1}_{B_1 \leftarrow B_2} [T]_{B_1 \leftarrow B_1} [\mathrm{id}]_{B_1 \leftarrow B_2}.
\end{align*}

It's easy to see that, if $B_1$ is the standard basis for $V = \mathbb{F}^n$, then $[\mathrm{id}]_{B_1 \leftarrow B_2}$ is the result of putting the basis vectors in $B_2$ into columns of a matrix, and this particular case is the change of basis formula.
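For instance, here is a NumPy sketch of that particular case, using a hypothetical operator on $\mathbb{R}^2$ and a hypothetical basis $B_2$ whose vectors happen to be eigenvectors, so the change of basis diagonalizes the matrix:

```python
import numpy as np

# Hypothetical operator in the standard basis B1
T_B1 = np.array([[2.0, 1.0],
                 [0.0, 3.0]])

# Columns are the vectors of a hypothetical basis B2; this matrix is [id]_{B1 <- B2}
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Change of basis: [T]_{B2<-B2} = [id]^{-1}_{B1<-B2} [T]_{B1<-B1} [id]_{B1<-B2}
T_B2 = np.linalg.inv(P) @ T_B1 @ P
print(T_B2)  # diagonal, since (1,0) and (1,1) are eigenvectors of T_B1
```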

Now, this works for an operator on $\mathbb{F}^n$. You've got a linear map between two unspecified spaces, so this formula will not apply. But, we can definitely use the same tools. Let
\begin{align*}
B_1 &= (e_1, e_2, e_3) \\
B_1' &= (e_1, e_1 + e_2, e_1 + e_2 + e_3) \\
B_2 &= (f_1, f_2) \\
B_2' &= (f_1, f_1 + f_2).
\end{align*}
We want $[L]_{B_2' \leftarrow B_1'}$, and we know $[L]_{B_2 \leftarrow B_1}$. We compute

\begin{align*}
[L]_{B_2' \leftarrow B_1'} &= [\mathrm{id} \circ L \circ \mathrm{id}]_{B_2' \leftarrow B_1'} \\
&= [\mathrm{id}]_{B_2' \leftarrow B_2} [L]_{B_2 \leftarrow B_1} [\mathrm{id}]_{B_1 \leftarrow B_1'}.
\end{align*}

We know $[L]_{B_2 \leftarrow B_1}$, so we must compute the other two matrices. We have

$$[\mathrm{id}]_{B_1 \leftarrow B_1'} = \left([e_1]_{B_1} \mid [e_1 + e_2]_{B_1} \mid [e_1 + e_2 + e_3]_{B_1} \right) = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$

Similarly,

$$[\mathrm{id}]_{B_2 \leftarrow B_2'} = \left([f_1]_{B_2} \mid \, [f_1 + f_2]_{B_2}\right) = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},$$

and so

$$[\mathrm{id}]_{B_2' \leftarrow B_2} = [\mathrm{id}]^{-1}_{B_2 \leftarrow B_2'} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}.$$

Finally, this gives us

$$[L]_{B_2' \leftarrow B_1'} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 & 2 \\
3 & 4 & 5 \end{pmatrix} \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} -3 & -6 & -9 \\
3 & 7 & 12 \end{pmatrix}.$$
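The final computation is easy to double-check numerically; assuming the matrices derived above, a short NumPy sketch:

```python
import numpy as np

L_B2_B1 = np.array([[0.0, 1.0, 2.0],   # the given matrix [L]_{B2 <- B1}
                    [3.0, 4.0, 5.0]])

# [id]_{B1 <- B1'}: coordinates of e1, e1+e2, e1+e2+e3 w.r.t. B1, as columns
id_B1_B1p = np.array([[1.0, 1.0, 1.0],
                      [0.0, 1.0, 1.0],
                      [0.0, 0.0, 1.0]])

# [id]_{B2' <- B2} is the inverse of [id]_{B2 <- B2'}
id_B2p_B2 = np.linalg.inv(np.array([[1.0, 1.0],
                                    [0.0, 1.0]]))

L_B2p_B1p = id_B2p_B2 @ L_B2_B1 @ id_B1_B1p
print(L_B2p_B1p)  # [[-3, -6, -9], [3, 7, 12]]
```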

## Best Answer

If I have understood your question properly, the argument should be the following. First of all $$ E = F \cdot C, $$ as you said. But then, since $E$ is also a basis, there is a matrix $D$ such that $$ F = E \cdot D. $$ Therefore $$ E = F \cdot C = (E \cdot D) \cdot C = E \cdot (D \cdot C). $$ Since $E$ is linearly independent, this yields that $D \cdot C$ is the identity matrix. You have proved that $C$ is invertible.

Clearly you can do the same starting with $F = E \cdot D = \dots = F \cdot (C \cdot D)$ to see that $C \cdot D$ is also the identity, but the above suffices anyway.
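The argument can also be illustrated numerically. The bases below are hypothetical, stored as matrix columns, with $E = F \cdot C$ and $F = E \cdot D$:

```python
import numpy as np

# Hypothetical bases of R^2, basis vectors as columns
F = np.array([[1.0, 0.0],
              [1.0, 1.0]])
E = np.array([[2.0, 1.0],
              [3.0, 1.0]])

# E = F C  gives  C = F^{-1} E;  similarly  F = E D  gives  D = E^{-1} F
C = np.linalg.inv(F) @ E
D = np.linalg.inv(E) @ F

# D C is the identity, so C is invertible (and likewise C D)
assert np.allclose(D @ C, np.eye(2))
assert np.allclose(C @ D, np.eye(2))
```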