I hate the change of basis formula. I think it confuses way too many people, and obscures the simple intuition going on behind the scenes.
Recall the definition of matrices for a linear map $T : V \to W$. If $B_1 = (v_1, \ldots, v_n)$ is a basis for $V$ and $B_2$ is a basis for $W$ (also ordered and finite), then we define
$$[T]_{B_2 \leftarrow B_1} = \left([Tv_1]_{B_2} \mid [Tv_2]_{B_2} \mid \ldots \mid [Tv_n]_{B_2} \right),$$
where $[w]_{B_2}$ refers to the coordinate column vector of $w \in W$ with respect to the basis $B_2$. Essentially, it's the matrix you get by transforming the basis $B_1$, writing the resulting vectors in terms of $B_2$, and writing the resulting coordinate vectors as columns.
Such a matrix has the following lovely property (and is completely defined by this property):
$$[T]_{B_2 \leftarrow B_1} [v]_{B_1} = [Tv]_{B_2}.$$
This is what makes the matrix useful. When we compute with finite-dimensional vector spaces, we tend to store vectors in terms of their coordinate vector with respect to a basis. So, this matrix allows us to directly apply $T$ to such a coordinate vector to return a coordinate vector in terms of the basis on the codomain.
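As a sanity check, here is a small numerical experiment confirming the defining property. The map $T$ and the bases below are invented purely for illustration; coordinate vectors are computed by solving against the matrix whose columns are the basis vectors.

```python
import numpy as np

# Hypothetical example: T : R^3 -> R^2 in standard coordinates,
# with non-standard bases B1 on the domain and B2 on the codomain.
T = np.array([[1., 2., 0.],
              [0., 1., 3.]])          # standard-coordinate matrix of T
P1 = np.array([[1., 1., 0.],
               [0., 1., 1.],
               [0., 0., 1.]])         # columns are the B1 basis vectors
P2 = np.array([[1., 1.],
               [0., 1.]])             # columns are the B2 basis vectors

# Column i of [T]_{B2<-B1} is the B2-coordinate vector of T applied to
# the i-th B1 basis vector: solve P2 x = T @ P1[:, i] for each column.
T_B2_B1 = np.linalg.solve(P2, T @ P1)

v = np.array([3., -1., 2.])           # an arbitrary vector, standard coords
v_B1 = np.linalg.solve(P1, v)         # [v]_{B1}
Tv_B2 = np.linalg.solve(P2, T @ v)    # [Tv]_{B2}

# The defining property: [T]_{B2<-B1} [v]_{B1} = [Tv]_{B2}.
assert np.allclose(T_B2_B1 @ v_B1, Tv_B2)
```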
This also means that, if we also have $S : W \to U$, and $U$ has a (finite, ordered) basis $B_3$, then we have
$$[S]_{B_3 \leftarrow B_2}[T]_{B_2 \leftarrow B_1}[v]_{B_1} = [S]_{B_3 \leftarrow B_2}[Tv]_{B_2} = [STv]_{B_3},$$
and so
$$[ST]_{B_3 \leftarrow B_1} = [S]_{B_3 \leftarrow B_2}[T]_{B_2 \leftarrow B_1}.$$
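The composition rule can be spot-checked numerically as well; the maps and bases here are again invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((2, 3))                # T : R^3 -> R^2, standard coords
S = rng.standard_normal((2, 2))                # S : R^2 -> R^2, standard coords
P1 = np.array([[1., 1., 1.],
               [0., 1., 1.],
               [0., 0., 1.]])                  # columns: basis B1 of V
P2 = np.array([[1., 1.],
               [0., 1.]])                      # columns: basis B2 of W
P3 = np.array([[2., 0.],
               [1., 1.]])                      # columns: basis B3 of U

def mat(A, P_dom, P_cod):
    """[A]_{B_cod <- B_dom} for a map A given in standard coordinates."""
    return np.linalg.solve(P_cod, A @ P_dom)

lhs = mat(S @ T, P1, P3)                       # [ST]_{B3<-B1}
rhs = mat(S, P2, P3) @ mat(T, P1, P2)          # [S]_{B3<-B2} [T]_{B2<-B1}
assert np.allclose(lhs, rhs)
```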
Note also that, if $\mathrm{id} : V \to V$ is the identity operator, then
$$[\mathrm{id}]_{B_1 \leftarrow B_1}[v]_{B_1} = [v]_{B_1},$$
which implies $[\mathrm{id}]_{B_1 \leftarrow B_1}$ is the $n \times n$ identity matrix $I_n$. Moreover, if $T$ is invertible, then $\dim W = \dim V = n$, and
$$I_n = [\mathrm{id}]_{B_1 \leftarrow B_1} = [T^{-1}T]_{B_1 \leftarrow B_1} = [T^{-1}]_{B_1 \leftarrow B_2}[T]_{B_2 \leftarrow B_1}.$$
Similarly,
$$I_n = [\mathrm{id}]_{B_2 \leftarrow B_2} = [TT^{-1}]_{B_2 \leftarrow B_2} = [T]_{B_2 \leftarrow B_1}[T^{-1}]_{B_1 \leftarrow B_2}.$$
What this means is
$$[T]_{B_2 \leftarrow B_1}^{-1} = [T^{-1}]_{B_1 \leftarrow B_2}.$$
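This inverse relation also checks out numerically; the invertible $T$ and the two bases here are invented for illustration.

```python
import numpy as np

# Check [T]_{B2<-B1}^{-1} = [T^{-1}]_{B1<-B2} for a made-up invertible T.
T  = np.array([[2., 1.],
               [1., 1.]])             # invertible, standard coordinates
P1 = np.array([[1., 1.],
               [0., 1.]])             # columns: B1 basis vectors
P2 = np.array([[1., 0.],
               [1., 1.]])             # columns: B2 basis vectors

T_B2_B1    = np.linalg.solve(P2, T @ P1)                 # [T]_{B2<-B1}
Tinv_B1_B2 = np.linalg.solve(P1, np.linalg.inv(T) @ P2)  # [T^{-1}]_{B1<-B2}

assert np.allclose(np.linalg.inv(T_B2_B1), Tinv_B1_B2)
```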
From this, we can derive the change of basis formula. If we have a linear operator $T : V \to V$ and two bases $B_1$ and $B_2$ on $V$, then
\begin{align*}
[T]_{B_2 \leftarrow B_2} &= [\mathrm{id} \circ T \circ \mathrm{id}]_{B_2 \leftarrow B_2} \\
&= [\mathrm{id}]_{B_2 \leftarrow B_1} [T]_{B_1 \leftarrow B_1} [\mathrm{id}]_{B_1 \leftarrow B_2} \\
&= [\mathrm{id}^{-1}]_{B_2 \leftarrow B_1} [T]_{B_1 \leftarrow B_1} [\mathrm{id}]_{B_1 \leftarrow B_2} \\
&= [\mathrm{id}]^{-1}_{B_1 \leftarrow B_2} [T]_{B_1 \leftarrow B_1} [\mathrm{id}]_{B_1 \leftarrow B_2}.
\end{align*}
It's easy to see that, if $B_1$ is the standard basis for $V = \mathbb{F}^n$, then $[\mathrm{id}]_{B_1 \leftarrow B_2}$ is the matrix whose columns are the vectors of $B_2$, and this particular case is the usual change of basis formula.
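Concretely, with $B_1$ the standard basis, the similarity computation looks like this; the operator and the basis $B_2$ below are made up for illustration.

```python
import numpy as np

# [T]_{B1<-B1} for an invented operator on R^3, B1 the standard basis.
T_std = np.array([[1., 0., 2.],
                  [0., 3., 0.],
                  [1., 1., 1.]])
# P = [id]_{B1<-B2}: just stack the B2 basis vectors as columns.
P = np.array([[1., 1., 1.],
              [0., 1., 1.],
              [0., 0., 1.]])

# Change of basis: [T]_{B2<-B2} = P^{-1} [T]_{B1<-B1} P.
T_B2 = np.linalg.inv(P) @ T_std @ P

# Verify the defining property on an arbitrary vector:
# converting v to B2 coords, applying [T]_{B2<-B2}, and converting
# back must agree with applying T in standard coordinates.
v = np.array([1., 2., 3.])
assert np.allclose(P @ (T_B2 @ np.linalg.solve(P, v)), T_std @ v)
```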
Now, that works for an operator on $\mathbb{F}^n$. Here you have a linear map $L$ between two different spaces, so that formula doesn't apply directly, but the same tools do. Let
\begin{align*}
B_1 &= (e_1, e_2, e_3) \\
B_1' &= (e_1, e_1 + e_2, e_1 + e_2 + e_3) \\
B_2 &= (f_1, f_2) \\
B_2' &= (f_1, f_1 + f_2).
\end{align*}
We want $[L]_{B_2' \leftarrow B_1'}$, and we know $[L]_{B_2 \leftarrow B_1}$. We compute
\begin{align*}
[L]_{B_2' \leftarrow B_1'} &= [\mathrm{id} \circ L \circ \mathrm{id}]_{B_2' \leftarrow B_1'} \\
&= [\mathrm{id}]_{B_2' \leftarrow B_2} [L]_{B_2 \leftarrow B_1} [\mathrm{id}]_{B_1 \leftarrow B_1'}.
\end{align*}
We know $[L]_{B_2 \leftarrow B_1}$, so we must compute the other two matrices. We have
$$[\mathrm{id}]_{B_1 \leftarrow B_1'} = \left([e_1]_{B_1} \mid [e_1 + e_2]_{B_1} \mid [e_1 + e_2 + e_3]_{B_1} \right) = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$
Similarly,
$$[\mathrm{id}]_{B_2 \leftarrow B_2'} = \left([f_1]_{B_2} \mid [f_1 + f_2]_{B_2}\right) = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},$$
and so
$$[\mathrm{id}]_{B_2' \leftarrow B_2} = [\mathrm{id}]^{-1}_{B_2 \leftarrow B_2'} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}.$$
Finally, this gives us
$$[L]_{B_2' \leftarrow B_1'} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 & 2 \\
3 & 4 & 5 \end{pmatrix} \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} -3 & -6 & -9 \\
3 & 7 & 12 \end{pmatrix}.$$
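This final product is easy to reproduce numerically with the exact matrices from the computation above:

```python
import numpy as np

L_B2_B1 = np.array([[0., 1., 2.],
                    [3., 4., 5.]])             # given [L]_{B2<-B1}
id_B1_B1p = np.array([[1., 1., 1.],
                      [0., 1., 1.],
                      [0., 0., 1.]])           # [id]_{B1<-B1'}
id_B2p_B2 = np.linalg.inv(np.array([[1., 1.],
                                    [0., 1.]]))  # [id]_{B2<-B2'}^{-1}

L_B2p_B1p = id_B2p_B2 @ L_B2_B1 @ id_B1_B1p
assert np.allclose(L_B2p_B1p, [[-3., -6., -9.],
                               [ 3.,  7., 12.]])
```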
Best Answer
Let us write
\begin{align*}
u_1 &= e_1 + e_2 + e_3 \\
u_2 &= 2e_1 \\
u_3 &= 3e_2,
\end{align*}
so we can associate the matrix
$$B = \begin{pmatrix} 1 & 2 & 0 \\ 1 & 0 & 3 \\ 1 & 0 & 0 \end{pmatrix}$$
whose columns are the $u_i$. The determinant of $B$ is nonzero because the $u_i$ are linearly independent, so they form a new basis. Then $B^{-1}$ exists and satisfies $B^{-1}B = I$, that is,
$$\begin{pmatrix} 0 & 0 & 1 \\ \frac{1}{2} & 0 & -\frac{1}{2} \\ 0 & \frac{1}{3} & -\frac{1}{3} \end{pmatrix} \begin{pmatrix} 1 & 2 & 0 \\ 1 & 0 & 3 \\ 1 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Here you can see how the rows of $B^{-1}$ behave as the dual vectors $u^{*i}$, i.e.
$$u^{*i}(u_j) = \delta^i_j,$$
just as the canonical dual basis satisfies $e^{*i}(e_j) = \delta^i_j$. Then
\begin{align*}
u^{*1} &= e^{*3} \\
u^{*2} &= \tfrac{1}{2}e^{*1} - \tfrac{1}{2}e^{*3} \\
u^{*3} &= \tfrac{1}{3}e^{*2} - \tfrac{1}{3}e^{*3},
\end{align*}
which immediately gives
\begin{align*}
e^{*1} &= u^{*1} + 2u^{*2} \\
e^{*2} &= u^{*1} + 3u^{*3} \\
e^{*3} &= u^{*1}.
\end{align*}
Finally, substituting into your covector $w = 2e^{*2} + 3e^{*3}$, you get
\begin{align*}
w &= 2(u^{*1} + 3u^{*3}) + 3u^{*1} \\
&= 5u^{*1} + 6u^{*3}.
\end{align*}
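The whole dual-basis computation can also be checked numerically: the rows of $B^{-1}$ give the $u^{*i}$ in terms of the $e^{*j}$, and the components of a covector transform as a row vector multiplied by $B$ on the right.

```python
import numpy as np

B = np.array([[1., 2., 0.],
              [1., 0., 3.],
              [1., 0., 0.]])          # columns are u_1, u_2, u_3
B_inv = np.linalg.inv(B)
assert np.allclose(B_inv @ B, np.eye(3))

# Row i of B_inv holds the e*-components of the dual vector u^{*i},
# matching the matrix written out above.
assert np.allclose(B_inv, [[0.,  0.,   1.],
                           [0.5, 0.,  -0.5],
                           [0., 1/3, -1/3]])

# Covector w = 2e^{*2} + 3e^{*3}: its u*-components are w_e @ B.
w_e = np.array([0., 2., 3.])
w_u = w_e @ B
assert np.allclose(w_u, [5., 0., 6.])  # w = 5u^{*1} + 6u^{*3}
```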