Let $\mathbf{x} = (x_{1}, x_{2}, x_{3})^T$ be the coordinates of a point in the $e$-basis and let $\mathbf{y} = (y_{1}, y_{2}, y_{3})^T$ be the coordinates of the same point in the $f$-basis.
It is the same point, so we require the following condition.
$$
x_{1} \mathbf{e}_{1} +
x_{2} \mathbf{e}_{2} +
x_{3} \mathbf{e}_{3}
=
y_{1} \mathbf{f}_{1} +
y_{2} \mathbf{f}_{2} +
y_{3} \mathbf{f}_{3}
$$
The question gives the way of writing the $f$-basis vectors in terms of the $e$-basis vectors:
$$
\begin{aligned}
\mathbf{f}_1 &= \mathbf{e}_1 + \mathbf{e}_2 \\
\mathbf{f}_2 &= \mathbf{e}_2 \\
\mathbf{f}_3 &= \mathbf{e}_1 - \mathbf{e}_3
\end{aligned}
$$
We can substitute these expressions into the coordinate equation above:
$$
x_{1} \mathbf{e}_{1} +
x_{2} \mathbf{e}_{2} +
x_{3} \mathbf{e}_{3}
=
y_{1} (\mathbf{e}_{1} + \mathbf{e}_{2}) +
y_{2} \mathbf{e}_{2} +
y_{3} (\mathbf{e}_{1} - \mathbf{e}_{3})
$$
$$
x_{1} \mathbf{e}_{1} +
x_{2} \mathbf{e}_{2} +
x_{3} \mathbf{e}_{3}
=
y_{1} \mathbf{e}_{1} + y_{1} \mathbf{e}_{2} +
y_{2} \mathbf{e}_{2} +
y_{3} \mathbf{e}_{1} - y_{3} \mathbf{e}_{3}
$$
$$
x_{1} \mathbf{e}_{1} +
x_{2} \mathbf{e}_{2} +
x_{3} \mathbf{e}_{3}
=
(y_{1} + y_{3}) \mathbf{e}_{1}
+
(y_{1} + y_{2} ) \mathbf{e}_{2}
- y_{3} \mathbf{e}_{3}
$$
Now $\mathbf{e}_1$, $\mathbf{e}_2$ and $\mathbf{e}_3$ are linearly independent, so the coefficient of each basis vector must agree on the two sides of the equation. That is, we can write:
$$
\begin{aligned}
x_1 &= y_1 + y_3 \\
x_2 & = y_1 + y_2 \\
x_3 &= -y_3
\end{aligned}
$$
The following is exactly the same system, with the spacing adjusted to line the terms up in columns:
$$
\begin{aligned}
x_1 &= y_1 & &+y_3 \\
x_2 & = y_1 &+ y_2 & \\
x_3 &= & &-y_3
\end{aligned}
$$
Writing the equations that relate the coordinates in this way, we can see how the set can be written as a single matrix equation:
$$
\begin{pmatrix}
x_{1} \\ x_{2} \\ x_{3}
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 1 \\
1 & 1 & 0 \\
0 & 0 & -1
\end{pmatrix}
\begin{pmatrix}
y_{1} \\ y_{2} \\ y_{3}
\end{pmatrix}
\Rightarrow
\mathbf{x}
=
\begin{pmatrix}
1 & 0 & 1 \\
1 & 1 & 0 \\
0 & 0 & -1
\end{pmatrix}
\mathbf{y}$$
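The matrix equation above can be reproduced in a few lines of Python (a minimal sketch, not part of the original solution; the entries of $T$ come directly from the basis relations given in the question):

```python
# Columns of T are the e-coordinates of the f-basis vectors
# (f1 = e1 + e2, f2 = e2, f3 = e1 - e3).
f1, f2, f3 = [1, 1, 0], [0, 1, 0], [1, 0, -1]
T = [[f1[i], f2[i], f3[i]] for i in range(3)]  # place the columns into rows

def matvec(M, v):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

y = [1, 2, 3]     # sample coordinates of a point in the f-basis
x = matvec(T, y)  # the same point's coordinates in the e-basis
print(x)          # [y1 + y3, y1 + y2, -y3] -> [4, 3, -3]
```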
This shows how a matrix converts coordinates in the $f$-basis to coordinates in the $e$-basis: $\mathbf{x} = T \mathbf{y}$. This matrix is the $T$ in the formula
$$
A' = T^{-1} A T
$$
where $A$ is the matrix of the transformation applied to coordinates in the $e$-basis. Having found $T$, we can find its inverse (by hand or with software):
$$
T^{-1}
=
\begin{pmatrix}
1 & 0 & 1 \\
-1 & 1 & -1 \\
0 & 0 & -1
\end{pmatrix}
$$
and finally, we can calculate $A'$
$$
A'
=
\begin{pmatrix}
-3& -2& -2 \\
3& 3& -1 \\
-7& -1& -6
\end{pmatrix}
$$
which is the matrix of the transformation that is applied to coordinates in the $f$-basis.
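The inverse of $T$ can be double-checked with a short pure-Python sketch (not part of the original solution; it uses Gauss–Jordan elimination with exact rational arithmetic, so no external library is needed):

```python
from fractions import Fraction

def inverse(M):
    """Invert a square matrix by Gauss-Jordan elimination (exact arithmetic)."""
    n = len(M)
    # Augment M with the identity matrix.
    aug = [[Fraction(M[i][j]) for j in range(n)]
           + [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        aug[col] = [v / aug[col][col] for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [a - factor * b for a, b in zip(aug[r], aug[col])]
    # Entries of this particular inverse are integers, so convert for display.
    return [[int(v) for v in row[n:]] for row in aug]

T = [[1, 0, 1], [1, 1, 0], [0, 0, -1]]
Tinv = inverse(T)
print(Tinv)  # [[1, 0, 1], [-1, 1, -1], [0, 0, -1]]

# Confirm that T times its inverse is the identity.
prod = [[sum(T[i][k] * Tinv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```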
This next part derives the formula relating the transformation matrices in the two bases.
Write $\mathbf{u}$ for the result of applying $A$ to the $e$-basis coordinate vector $\mathbf{x}$, and $\mathbf{v}$ for the result of applying $A'$ to the $f$-basis coordinate vector $\mathbf{y}$:
$$
\begin{aligned}
\mathbf{u} &= A \mathbf{x} \\
\mathbf{v} &= A' \mathbf{y} \\
\end{aligned}
$$
The vectors $\mathbf{x}$ and $\mathbf{y}$ represent the same point in the two bases, and so do the vectors $\mathbf{u}$ and $\mathbf{v}$, so both pairs are related by $T$:
$$
\begin{aligned}
\mathbf{x} &= T \mathbf{y} \\
\mathbf{u} &= T \mathbf{v} \\
\end{aligned}
$$
Combining these relations, we can write:
$$
\begin{aligned}
\mathbf{u} &= T \mathbf{v} \\
A \mathbf{x} &= T \mathbf{v} \\
A \mathbf{x} &= T A' \mathbf{y} \\
A T\mathbf{y} &= T A' \mathbf{y} \\
T^{-1} A T\mathbf{y} &= A' \mathbf{y} \\
\end{aligned}
$$
As this works for all $\mathbf{y}$ we can conclude that $T^{-1} A T = A' $.
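This conclusion can be checked numerically. Since the original matrix $A$ is not restated in this excerpt, the sketch below uses a hypothetical stand-in $A$ (chosen only for illustration): it computes $A' = T^{-1} A T$ and confirms that applying $A'$ in the $f$-basis and then converting gives the same result as converting first and then applying $A$ in the $e$-basis.

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(M, v):
    """Multiply a matrix by a column vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

T    = [[1, 0, 1], [1, 1, 0], [0, 0, -1]]
Tinv = [[1, 0, 1], [-1, 1, -1], [0, 0, -1]]  # inverse of T

A = [[2, 0, 1], [1, 3, 0], [0, 1, 1]]  # hypothetical stand-in for A
Aprime = matmul(Tinv, matmul(A, T))    # A' = T^{-1} A T

y = [1, -2, 5]                         # arbitrary f-basis coordinates
lhs = matvec(T, matvec(Aprime, y))     # apply A' in the f-basis, then convert
rhs = matvec(A, matvec(T, y))          # convert to the e-basis, then apply A
assert lhs == rhs                      # same point either way
```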
Best Answer
I hate the change of basis formula. I think it confuses way too many people, and obscures the simple intuition going on behind the scenes.
Recall the definition of matrices for a linear map $T : V \to W$. If $B_1 = (v_1, \ldots, v_n)$ is a basis for $V$ and $B_2$ is a basis for $W$ (also ordered and finite), then we define $$[T]_{B_2 \leftarrow B_1} = \left([Tv_1]_{B_2} \mid [Tv_2]_{B_2} \mid \ldots \mid [Tv_n]_{B_2} \right),$$ where $[w]_{B_2}$ refers to the coordinate column vector of $w \in W$ with respect to the basis $B_2$. Essentially, it's the matrix you get by transforming the basis $B_1$, writing the resulting vectors in terms of $B_2$, and writing the resulting coordinate vectors as columns.
Such a matrix has the following lovely property (and is completely defined by this property): for every $v \in V$,
$$[T]_{B_2 \leftarrow B_1} [v]_{B_1} = [Tv]_{B_2}.$$
This is what makes the matrix useful. When we compute with finite-dimensional vector spaces, we tend to store vectors in terms of their coordinate vector with respect to a basis. So, this matrix allows us to directly apply $T$ to such a coordinate vector to return a coordinate vector in terms of the basis on the codomain.
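The property just described can be checked on a tiny concrete example (entirely hypothetical data, not from the question): take $V = W = \mathbb{R}^2$, let $B_1$ be the standard basis, $B_2 = ((1, 1), (0, 1))$, and $T(x, y) = (x + 2y, 3x + 4y)$.

```python
# Hypothetical example: V = W = R^2, B1 = standard basis,
# B2 = ((1, 1), (0, 1)), and T(x, y) = (x + 2y, 3x + 4y).
def T(v):
    x, y = v
    return (x + 2 * y, 3 * x + 4 * y)

def coords_B2(w):
    """Coordinates of w with respect to B2 = ((1, 1), (0, 1)):
    w = a*(1, 1) + b*(0, 1)  =>  a = w[0], b = w[1] - w[0]."""
    return (w[0], w[1] - w[0])

# Columns of [T]_{B2 <- B1}: images of the B1 vectors, written in B2.
cols = [coords_B2(T(e)) for e in [(1, 0), (0, 1)]]
M = [[cols[j][i] for j in range(2)] for i in range(2)]
print(M)  # [[1, 2], [2, 2]]

# Defining property: M [v]_{B1} == [Tv]_{B2} for any v.
v = (5, -3)
Mv = tuple(sum(M[i][j] * v[j] for j in range(2)) for i in range(2))
assert Mv == coords_B2(T(v))
```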
This also means that, if we also have $S : W \to U$, and $U$ has a (finite, ordered) basis $B_3$, then we have
$$[S]_{B_3 \leftarrow B_2}[T]_{B_2 \leftarrow B_1}[v]_{B_1} = [S]_{B_3 \leftarrow B_2}[Tv]_{B_2} = [STv]_{B_3},$$
and so
$$[ST]_{B_3 \leftarrow B_1} = [S]_{B_3 \leftarrow B_2}[T]_{B_2 \leftarrow B_1}.$$
Note also that, if $\mathrm{id} : V \to V$ is the identity operator, then
$$[\mathrm{id}]_{B_1 \leftarrow B_1}[v]_{B_1} = [v]_{B_1},$$
which implies $[\mathrm{id}]_{B_1 \leftarrow B_1}$ is the $n \times n$ identity matrix $I_n$. Moreover, if $T$ is invertible, then $\operatorname{dim} W = n$, and
$$I_n = [\mathrm{id}]_{B_1 \leftarrow B_1} = [T^{-1}T]_{B_1 \leftarrow B_1} = [T^{-1}]_{B_1 \leftarrow B_2}[T]_{B_2 \leftarrow B_1}.$$
Similarly,
$$I_n = [\mathrm{id}]_{B_2 \leftarrow B_2} = [TT^{-1}]_{B_2 \leftarrow B_2} = [T]_{B_2 \leftarrow B_1}[T^{-1}]_{B_1 \leftarrow B_2}.$$
What this means is that $[T]_{B_2 \leftarrow B_1}$ is invertible, with
$$[T]^{-1}_{B_2 \leftarrow B_1} = [T^{-1}]_{B_1 \leftarrow B_2}.$$
From this, we can derive the change of basis formula. If we have a linear operator $T : V \to V$ and two bases $B_1$ and $B_2$ on $V$, then
$$[T]_{B_2 \leftarrow B_2} = [\mathrm{id}]_{B_2 \leftarrow B_1}[T]_{B_1 \leftarrow B_1}[\mathrm{id}]_{B_1 \leftarrow B_2}.$$
It's easy to see that, if $B_1$ is the standard basis for $V = \mathbb{F}^n$, then $[\mathrm{id}]_{B_1 \leftarrow B_2}$ is the result of putting the basis vectors in $B_2$ into columns of a matrix, and this particular case is the change of basis formula.
Now, this works for an operator on $\mathbb{F}^n$. You've got a linear map between two unspecified spaces, so this formula will not apply. But, we can definitely use the same tools. Let \begin{align*} B_1 &= (e_1, e_2, e_3) \\ B_1' &= (e_1, e_1 + e_2, e_1 + e_2 + e_3) \\ B_2 &= (f_1, f_2) \\ B_2' &= (f_1, f_1 + f_2). \end{align*} We want $[L]_{B_2' \leftarrow B_1'}$, and we know $[L]_{B_2 \leftarrow B_1}$. We compute
\begin{align*} [L]_{B_2' \leftarrow B_1'} &= [\mathrm{id} \circ L \circ \mathrm{id}]_{B_2' \leftarrow B_1'} \\ &= [\mathrm{id}]_{B_2' \leftarrow B_2} [L]_{B_2 \leftarrow B_1} [\mathrm{id}]_{B_1 \leftarrow B_1'}. \end{align*}
We know $[L]_{B_2 \leftarrow B_1}$, so we must compute the other two matrices. We have
$$[\mathrm{id}]_{B_1 \leftarrow B_1'} = \left([e_1]_{B_1} \mid [e_1 + e_2]_{B_1} | \, [e_1 + e_2 + e_3]_{B_1} \right) = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$
Similarly,
$$[\mathrm{id}]_{B_2 \leftarrow B_2'} = \left([f_1]_{B_2} \mid \, [f_1 + f_2]_{B_2}\right) = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},$$
and so
$$[\mathrm{id}]_{B_2' \leftarrow B_2} = [\mathrm{id}]^{-1}_{B_2 \leftarrow B_2'} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}.$$
Finally, this gives us,
$$[L]_{B_2' \leftarrow B_1'} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \end{pmatrix} \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} -3 & -6 & -9 \\ 3 & 7 & 12 \end{pmatrix}.$$
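The final three-factor product is easy to check mechanically (a minimal Python sketch, not part of the original answer, using a generic matrix-multiply helper):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

change_out = [[1, -1], [0, 1]]                  # [id]_{B2' <- B2}
L          = [[0, 1, 2], [3, 4, 5]]             # [L]_{B2 <- B1}
change_in  = [[1, 1, 1], [0, 1, 1], [0, 0, 1]]  # [id]_{B1 <- B1'}

result = matmul(change_out, matmul(L, change_in))
print(result)  # [[-3, -6, -9], [3, 7, 12]]
```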