I haven’t done this in quite some time, so this solution is probably unnecessarily complicated:
We identify $\mathbb{R}^{2 \times 2}$ with $\mathbb{R}^4$ via
$$
\mathbb{R}^{2 \times 2} \to \mathbb{R}^4, \,
\begin{pmatrix}
x & y \\
z & t
\end{pmatrix}
\mapsto
(x,y,z,t)^T.
$$
(So the “default basis” you used corresponds to the standard basis $(e_1, e_2, e_3, e_4)$ of $\mathbb{R}^4$.) If we understand $L$ as a linear map $\hat{L} \colon \mathbb{R}^4 \to \mathbb{R}^4$ then $\hat{L}$ is (with respect to the standard basis on both sides) given by the matrix
$$
A =
\begin{pmatrix}
1 & 1 & 0 & 1 \\
1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 \\
1 & 0 & 1 & 1
\end{pmatrix}.
$$
Also notice that the inner product on $\mathbb{R}^{2 \times 2}$ corresponds to the standard scalar product on $\mathbb{R}^4$ because
$$
\left\langle
\begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{pmatrix},
\begin{pmatrix}
b_{11} & b_{12} \\
b_{21} & b_{22}
\end{pmatrix}
\right\rangle
= a_{11} b_{11} + a_{12} b_{12} + a_{21} b_{21} + a_{22} b_{22}.
$$
(This also justifies calling it the default inner product.) So finding an orthonormal basis of $\mathbb{R}^{2 \times 2}$ with respect to which $L$ is diagonal is the same as finding an orthonormal basis of $\mathbb{R}^4$ with respect to which $\hat{L}$ is represented by a diagonal matrix.
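(If you want to sanity-check the inner-product correspondence numerically, here is a quick numpy sketch; the two matrices are arbitrary examples of mine.)

```python
import numpy as np

# two arbitrary 2x2 matrices
X = np.array([[1.0, 2.0], [3.0, 4.0]])
Y = np.array([[5.0, 6.0], [7.0, 8.0]])

# the entrywise inner product on R^{2x2} ...
entrywise = np.sum(X * Y)

# ... equals the standard dot product of the flattened vectors
assert np.isclose(entrywise, X.flatten() @ Y.flatten())  # both are 70.0
```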
There are now different ways to solve this problem. We will first calculate the eigenspaces of $\hat{L}$; because $A$ is symmetric we know that $\hat{L}$ is diagonalizable. Then we will use the following fact:
Proposition: Let $S \in \mathbb{R}^{n \times n}$ be symmetric and let $x,y \in \mathbb{R}^n$ be eigenvectors of $S$ for eigenvalues $\lambda \neq \mu$. Then $x$ and $y$ are orthogonal.
Proof: Notice that
\begin{align*}
\lambda \langle x,y \rangle
&= \langle \lambda x, y \rangle
= \langle Sx, y \rangle
= (Sx)^T y
= x^T S^T y
= x^T S y \\
&= \langle x, S y \rangle
= \langle x, \mu y \rangle
= \mu \langle x, y \rangle.
\end{align*}
Because $\lambda \neq \mu$ it follows that $\langle x,y \rangle = 0$.
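As a quick numerical illustration of the proposition (not part of the proof): `numpy.linalg.eigh` is built for symmetric matrices and returns an orthonormal set of eigenvectors, which we can verify on our matrix $A$.

```python
import numpy as np

A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]], dtype=float)

# eigh is specifically for symmetric matrices; the columns of V
# are its eigenvectors, returned as an orthonormal set
eigenvalues, V = np.linalg.eigh(A)

print(np.round(eigenvalues, 6))          # [-1.  1.  1.  3.]
assert np.allclose(V.T @ V, np.eye(4))   # pairwise orthonormal columns
```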
So the eigenspaces of different eigenvalues are orthogonal to each other. Therefore we can compute an orthonormal basis for each eigenspace and then put them together to get one of $\mathbb{R}^4$; each basis vector will then in particular be an eigenvector of $\hat{L}$.
By some lengthy calculation it can be shown that the characteristic polynomial of $A$ is given by
$$
\chi_A(t) = t^4 - 4 t^3 + 2 t^2 + 4t - 3.
$$
It is easy to guess the roots $1$ and $-1$, so we can factor $\chi_A$ and get
$$
\chi_A(t) = (t-1)^2 (t+1) (t-3).
$$
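If you want to skip the lengthy calculation, a computer algebra system can confirm both the characteristic polynomial and its factorization; a small sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 1, 0, 1],
               [1, 1, 1, 0],
               [0, 1, 1, 1],
               [1, 0, 1, 1]])

chi = A.charpoly(t).as_expr()
print(sp.expand(chi))  # t**4 - 4*t**3 + 2*t**2 + 4*t - 3
print(sp.factor(chi))  # (t - 3)*(t - 1)**2*(t + 1)
```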
The eigenspaces can now be calculated as usual, and we find that
$$
E_1 = \langle (0,-1,0,1)^T, (-1,0,1,0)^T \rangle, \;
E_{-1} = \langle (-1,1,-1,1)^T \rangle, \;
E_3 = \langle (1,1,1,1)^T \rangle,
$$
where $E_\lambda$ denotes the eigenspace for the eigenvalue $\lambda$.
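These eigenspaces can likewise be confirmed with sympy (the order, signs, and scaling of the basis vectors it returns may differ from the ones above):

```python
import sympy as sp

A = sp.Matrix([[1, 1, 0, 1],
               [1, 1, 1, 0],
               [0, 1, 1, 1],
               [1, 0, 1, 1]])

# each entry: (eigenvalue, algebraic multiplicity, basis of the eigenspace)
for eigenvalue, multiplicity, vectors in A.eigenvects():
    print(eigenvalue, multiplicity, [list(v) for v in vectors])
# -1 1 [[-1, 1, -1, 1]]
#  1 2 [[-1, 0, 1, 0], [0, -1, 0, 1]]
#  3 1 [[1, 1, 1, 1]]
```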
Next we need to find an orthonormal basis for each eigenspace. We can always do this by picking some basis and then using Gram–Schmidt. But here we are pretty lucky:
We know the basis $((0,-1,0,1)^T, (-1,0,1,0)^T)$ of $E_1$. Because both basis vectors are already orthogonal to each other we only need to normalize them. So we get $b_1 = \frac{1}{\sqrt{2}}(0,-1,0,1)^T$ and $b_2 = \frac{1}{\sqrt{2}}(-1,0,1,0)^T$.
In the case of $E_{-1}$ and $E_3$ we are even luckier, as they are both one-dimensional. So here too we only need to normalize and thus get $b_3 = \frac{1}{2} (-1,1,-1,1)^T$ and $b_4 = \frac{1}{2}(1,1,1,1)^T$.
Putting these together we have now found a basis $(b_1, b_2, b_3, b_4)$ of $\mathbb{R}^4$ given by
$$
b_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ -1 \\ 0 \\ 1 \end{pmatrix}, \;
b_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} -1 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \;
b_3 = \frac{1}{2} \begin{pmatrix} -1 \\ 1 \\ -1 \\ 1 \end{pmatrix}, \;
b_4 = \frac{1}{2} \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix},
$$
which is orthonormal and consists of eigenvectors of $\hat{L}$. The corresponding $2 \times 2$ matrices are
\begin{align*}
B_1 &= \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & -1 \\ 0 & 1 \end{pmatrix}, &
B_2 &= \frac{1}{\sqrt{2}} \begin{pmatrix} -1 & 0 \\ 1 & 0 \end{pmatrix}, \\
B_3 &= \frac{1}{2} \begin{pmatrix} -1 & 1 \\ -1 & 1 \end{pmatrix}, &
B_4 &= \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}.
\end{align*}
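As a final sanity check for this answer, we can verify numerically that $(b_1, b_2, b_3, b_4)$ is orthonormal and diagonalizes $A$; a short numpy sketch:

```python
import numpy as np

s = 1 / np.sqrt(2)
b1 = s * np.array([0, -1, 0, 1])
b2 = s * np.array([-1, 0, 1, 0])
b3 = 0.5 * np.array([-1, 1, -1, 1])
b4 = 0.5 * np.array([1, 1, 1, 1])

B = np.column_stack([b1, b2, b3, b4])
A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]], dtype=float)

# B is orthogonal, so B^T B = I ...
assert np.allclose(B.T @ B, np.eye(4))
# ... and B^T A B is the diagonal matrix of the eigenvalues
assert np.allclose(B.T @ A @ B, np.diag([1.0, 1.0, -1.0, 3.0]))
```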
I hate the change of basis formula. I think it confuses way too many people, and obscures the simple intuition going on behind the scenes.
Recall the definition of matrices for a linear map $T : V \to W$. If $B_1 = (v_1, \ldots, v_n)$ is a basis for $V$ and $B_2$ is a basis for $W$ (also ordered and finite), then we define
$$[T]_{B_2 \leftarrow B_1} = \left([Tv_1]_{B_2} \mid [Tv_2]_{B_2} \mid \cdots \mid [Tv_n]_{B_2} \right),$$
where $[w]_{B_2}$ refers to the coordinate column vector of $w \in W$ with respect to the basis $B_2$. Essentially, it's the matrix you get by transforming the basis $B_1$, writing the resulting vectors in terms of $B_2$, and writing the resulting coordinate vectors as columns.
Such a matrix has the following lovely property (and is completely defined by this property):
$$[T]_{B_2 \leftarrow B_1} [v]_{B_1} = [Tv]_{B_2}.$$
This is what makes the matrix useful. When we compute with finite-dimensional vector spaces, we tend to store vectors in terms of their coordinate vector with respect to a basis. So, this matrix allows us to directly apply $T$ to such a coordinate vector to return a coordinate vector in terms of the basis on the codomain.
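To make this concrete, here is a minimal numpy sketch of the definition and its defining property; the function name `matrix_of`, the example map, and the bases are my own choices, with each basis stored as the columns of a matrix:

```python
import numpy as np

def matrix_of(T, B1, B2):
    """[T]_{B2 <- B1}: apply T to each basis vector (the columns of B1)
    and express the results in coordinates with respect to B2."""
    # [w]_{B2} solves B2 @ c = w, so we solve one system per column
    return np.linalg.solve(B2, T(B1))

# example: a diagonal scaling on R^2
T = lambda X: np.array([[2.0, 0.0], [0.0, 3.0]]) @ X
B1 = np.array([[1.0, 1.0], [0.0, 1.0]])   # basis (e1, e1 + e2)
B2 = np.eye(2)                            # standard basis

M = matrix_of(T, B1, B2)
v = np.array([1.0, 2.0])                  # coordinate vector [v]_{B1}
# defining property: M @ [v]_{B1} == [T v]_{B2}
assert np.allclose(M @ v, np.linalg.solve(B2, T(B1 @ v)))
```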
This also means that, if we also have $S : W \to U$, and $U$ has a (finite, ordered) basis $B_3$, then we have
$$[S]_{B_3 \leftarrow B_2}[T]_{B_2 \leftarrow B_1}[v]_{B_1} = [S]_{B_3 \leftarrow B_2}[Tv]_{B_2} = [STv]_{B_3},$$
and so
$$[ST]_{B_3 \leftarrow B_1} = [S]_{B_3 \leftarrow B_2}[T]_{B_2 \leftarrow B_1}.$$
Note also that, if $\mathrm{id} : V \to V$ is the identity operator, then
$$[\mathrm{id}]_{B_1 \leftarrow B_1}[v]_{B_1} = [v]_{B_1},$$
which implies $[\mathrm{id}]_{B_1 \leftarrow B_1}$ is the $n \times n$ identity matrix $I_n$. Moreover, if $T$ is invertible, then $\operatorname{dim} W = n$, and
$$I_n = [\mathrm{id}]_{B_1 \leftarrow B_1} = [T^{-1}T]_{B_1 \leftarrow B_1} = [T^{-1}]_{B_1 \leftarrow B_2}[T]_{B_2 \leftarrow B_1}.$$
Similarly,
$$I_n = [\mathrm{id}]_{B_2 \leftarrow B_2} = [TT^{-1}]_{B_2 \leftarrow B_2} = [T]_{B_2 \leftarrow B_1}[T^{-1}]_{B_1 \leftarrow B_2}.$$
What this means is
$$[T]_{B_2 \leftarrow B_1}^{-1} = [T^{-1}]_{B_1 \leftarrow B_2}.$$
From this, we can derive the change of basis formula. If we have a linear operator $T : V \to V$ and two bases $B_1$ and $B_2$ on $V$, then
\begin{align*}
[T]_{B_2 \leftarrow B_2} &= [\mathrm{id} \circ T \circ \mathrm{id}]_{B_2 \leftarrow B_2} \\
&= [\mathrm{id}]_{B_2 \leftarrow B_1} [T]_{B_1 \leftarrow B_1} [\mathrm{id}]_{B_1 \leftarrow B_2} \\
&= [\mathrm{id}^{-1}]_{B_2 \leftarrow B_1} [T]_{B_1 \leftarrow B_1} [\mathrm{id}]_{B_1 \leftarrow B_2} \\
&= [\mathrm{id}]^{-1}_{B_1 \leftarrow B_2} [T]_{B_1 \leftarrow B_1} [\mathrm{id}]_{B_1 \leftarrow B_2}.
\end{align*}
It's easy to see that, if $B_1$ is the standard basis for $V = \mathbb{F}^n$, then $[\mathrm{id}]_{B_1 \leftarrow B_2}$ is the result of putting the basis vectors in $B_2$ into columns of a matrix, and this particular case is the change of basis formula.
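A small numpy sketch of this special case, with an arbitrary operator and basis of my own choosing:

```python
import numpy as np

# an arbitrary operator on R^3, written in the standard basis B1
T_std = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])

# columns of C are the vectors of B2, i.e. C = [id]_{B1 <- B2}
C = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# change of basis: [T]_{B2 <- B2} = C^{-1} [T]_{B1 <- B1} C
T_B2 = np.linalg.solve(C, T_std @ C)

# check the defining property on a sample vector w
w = np.array([1.0, 2.0, 3.0])        # w in standard coordinates
w_B2 = np.linalg.solve(C, w)         # [w]_{B2}
assert np.allclose(T_B2 @ w_B2, np.linalg.solve(C, T_std @ w))
```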
Now, this works for an operator on $\mathbb{F}^n$. You've got a linear map between two unspecified spaces, so this formula will not apply. But, we can definitely use the same tools. Let
\begin{align*}
B_1 &= (e_1, e_2, e_3) \\
B_1' &= (e_1, e_1 + e_2, e_1 + e_2 + e_3) \\
B_2 &= (f_1, f_2) \\
B_2' &= (f_1, f_1 + f_2).
\end{align*}
We want $[L]_{B_2' \leftarrow B_1'}$, and we know $[L]_{B_2 \leftarrow B_1}$. We compute
\begin{align*}
[L]_{B_2' \leftarrow B_1'} &= [\mathrm{id} \circ L \circ \mathrm{id}]_{B_2' \leftarrow B_1'} \\
&= [\mathrm{id}]_{B_2' \leftarrow B_2} [L]_{B_2 \leftarrow B_1} [\mathrm{id}]_{B_1 \leftarrow B_1'}.
\end{align*}
We know $[L]_{B_2 \leftarrow B_1}$, so we must compute the other two matrices. We have
$$[\mathrm{id}]_{B_1 \leftarrow B_1'} = \left([e_1]_{B_1} \mid [e_1 + e_2]_{B_1} \mid [e_1 + e_2 + e_3]_{B_1} \right) = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$
Similarly,
$$[\mathrm{id}]_{B_2 \leftarrow B_2'} = \left([f_1]_{B_2} \mid \, [f_1 + f_2]_{B_2}\right) = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},$$
and so
$$[\mathrm{id}]_{B_2' \leftarrow B_2} = [\mathrm{id}]^{-1}_{B_2 \leftarrow B_2'} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}.$$
Finally, this gives us
$$[L]_{B_2' \leftarrow B_1'} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 & 2 \\
3 & 4 & 5 \end{pmatrix} \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} -3 & -6 & -9 \\
3 & 7 & 12 \end{pmatrix}.$$
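This final product is easy to check by machine; for instance:

```python
import numpy as np

P_left  = np.array([[1, -1],
                    [0,  1]])            # [id]_{B2' <- B2}
L_mat   = np.array([[0, 1, 2],
                    [3, 4, 5]])          # [L]_{B2 <- B1}
P_right = np.array([[1, 1, 1],
                    [0, 1, 1],
                    [0, 0, 1]])          # [id]_{B1 <- B1'}

print(P_left @ L_mat @ P_right)
# [[-3 -6 -9]
#  [ 3  7 12]]
```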
(a) The matrix equation of the operator $T$ in the basis $B$ is $$Y=AX,\quad X=\begin{pmatrix}x_1\\ \vdots\\ x_n\end{pmatrix}\text{ the coordinates of }x\text{ in }B,\quad Y=\begin{pmatrix}y_1\\ \vdots\\ y_n\end{pmatrix}\text{ the coordinates of }T(x)\text{ in }B.$$
(b) The change of basis matrix from $B=(e_1, e_2, e_3, e_4)$ to $$B'=(e_1, e_1+e_2, e_1+e_2+e_3, e_1+e_2+e_3+e_4)$$ is (writing the coordinates of the $B'$ vectors as columns) $$P=\begin{pmatrix}1&1&1&1\\0&1&1&1\\0&0&1&1\\0&0&0&1\end{pmatrix}.$$ (c) According to a well-known theorem, the matrix equation of the operator $T$ in $B'$ is
$$Y'=\left(P^{-1}AP\right)X',\quad X'=\begin{pmatrix}x'_1\\ \vdots\\ x'_n\end{pmatrix}\text{ the coordinates of }x\text{ in }B',\quad Y'=\begin{pmatrix}y'_1\\ \vdots\\ y'_n\end{pmatrix}\text{ the coordinates of }T(x)\text{ in }B'.$$
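For the concrete $P$ above, a computer algebra system gives $P^{-1}$ directly; a sympy sketch:

```python
import sympy as sp

# columns of P are the coordinates of the B' vectors in the basis B
P = sp.Matrix([[1, 1, 1, 1],
               [0, 1, 1, 1],
               [0, 0, 1, 1],
               [0, 0, 0, 1]])

print(P.inv())
# Matrix([[1, -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1], [0, 0, 0, 1]])
# given a concrete A, the matrix of T in B' is then P.inv() * A * P
```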