I haven’t done this in quite some time, so this solution is probably unnecessarily complicated:
We identify $\mathbb{R}^{2 \times 2}$ with $\mathbb{R}^4$ via
$$
\mathbb{R}^{2 \times 2} \to \mathbb{R}^4, \,
\begin{pmatrix}
x & y \\
z & t
\end{pmatrix}
\mapsto
(x,y,z,t)^T.
$$
(So the “default basis” you used corresponds to the standard basis $(e_1, e_2, e_3, e_4)$ of $\mathbb{R}^4$.) If we understand $L$ as a linear map $\hat{L} \colon \mathbb{R}^4 \to \mathbb{R}^4$ then $\hat{L}$ is (with respect to the standard basis on both sides) given by the matrix
$$
A =
\begin{pmatrix}
1 & 1 & 0 & 1 \\
1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 \\
1 & 0 & 1 & 1
\end{pmatrix}.
$$
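To make the identification concrete, here is a small numpy sketch (an illustration, not part of the original argument): applying $L$ to a matrix amounts to multiplying its coordinate vector by $A$.

```python
import numpy as np

# Matrix of L-hat with respect to the standard basis of R^4 (as above)
A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]], dtype=float)

def vec(M):
    """Identify the 2x2 matrix (x y; z t) with the vector (x, y, z, t)."""
    return M.reshape(-1)

def unvec(v):
    """Inverse identification: vector (x, y, z, t) back to a 2x2 matrix."""
    return v.reshape(2, 2)

# Example: apply L to an arbitrary 2x2 matrix through the identification
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
LM = unvec(A @ vec(M))
```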
Also notice that the inner product on $\mathbb{R}^{2 \times 2}$ corresponds to the standard scalar product on $\mathbb{R}^4$ because
$$
\left\langle
\begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{pmatrix},
\begin{pmatrix}
b_{11} & b_{12} \\
b_{21} & b_{22}
\end{pmatrix}
\right\rangle
= a_{11} b_{11} + a_{12} b_{12} + a_{21} b_{21} + a_{22} b_{22}.
$$
(This also justifies calling it the default inner product.) So finding an orthonormal basis of $\mathbb{R}^{2 \times 2}$ with respect to which $L$ is diagonal is the same as finding an orthonormal basis of $\mathbb{R}^4$ with respect to which $\hat{L}$ is represented by a diagonal matrix.
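As a quick numerical sanity check (a sketch, not part of the argument): the entrywise inner product of two matrices agrees with the dot product of their coordinate vectors.

```python
import numpy as np

P = np.array([[1.0, 2.0],
              [3.0, 4.0]])
Q = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Entrywise (Frobenius) inner product on 2x2 matrices
frobenius = float(np.sum(P * Q))

# Standard scalar product of the corresponding vectors in R^4
dot = float(P.reshape(-1) @ Q.reshape(-1))
```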
There are now different ways to solve this problem. We will first calculate the eigenspaces of $\hat{L}$; because $A$ is symmetric we know that $\hat{L}$ is diagonalizable. Then we will use the following fact:
Proposition: Let $S \in \mathbb{R}^{n \times n}$ be symmetric and let $x, y \in \mathbb{R}^n$ be eigenvectors of $S$ for eigenvalues $\lambda \neq \mu$. Then $x$ and $y$ are orthogonal.
Proof: Notice that
\begin{align*}
\lambda \langle x,y \rangle
&= \langle \lambda x, y \rangle
= \langle Sx, y \rangle
= (Sx)^T y
= x^T S^T y
= x^T S y \\
&= \langle x, S y \rangle
= \langle x, \mu y \rangle
= \mu \langle x, y \rangle.
\end{align*}
Because $\lambda \neq \mu$ it follows that $\langle x,y \rangle = 0$.
So the eigenspaces for different eigenvalues are orthogonal to each other. Therefore we can compute an orthonormal basis for each eigenspace and then put them together to get an orthonormal basis of $\mathbb{R}^4$; each basis vector will in particular be an eigenvector of $\hat{L}$.
By some lengthy calculation it can be shown that the characteristic polynomial of $A$ is given by
$$
\chi_A(t) = t^4 - 4 t^3 + 2 t^2 + 4t - 3.
$$
It is easy to guess the roots $1$ and $-1$, so we can factor $\chi_A$ and get
$$
\chi_A(t) = (t-1)^2 (t+1) (t-3).
$$
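The lengthy calculation can be double-checked numerically, for instance with numpy (`np.poly` returns the coefficients of the characteristic polynomial of a square matrix, highest degree first):

```python
import numpy as np

A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]], dtype=float)

# Coefficients of the characteristic polynomial, highest degree first:
# expect t^4 - 4 t^3 + 2 t^2 + 4 t - 3
coeffs = np.poly(A)

# Eigenvalues of the symmetric matrix A, sorted: should be -1, 1, 1, 3
eigenvalues = np.sort(np.linalg.eigvalsh(A))
```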
The eigenspaces can now be calculated as usual, and we find that
$$
E_1 = \langle (0,-1,0,1)^T, (-1,0,1,0)^T \rangle, \;
E_{-1} = \langle (-1,1,-1,1)^T \rangle, \;
E_3 = \langle (1,1,1,1)^T \rangle,
$$
where $E_\lambda$ denotes the eigenspace for the eigenvalue $\lambda$.
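Each of the spanning vectors above can be checked directly; a short numpy sketch verifying $A v = \lambda v$:

```python
import numpy as np

A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]], dtype=float)

# (eigenvalue, claimed eigenvector) pairs from the eigenspaces above
candidates = [
    (1,  np.array([0., -1., 0., 1.])),
    (1,  np.array([-1., 0., 1., 0.])),
    (-1, np.array([-1., 1., -1., 1.])),
    (3,  np.array([1., 1., 1., 1.])),
]
checks = [np.allclose(A @ v, lam * v) for lam, v in candidates]
```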
Next we need to find an orthonormal basis for each eigenspace. We can always do this by picking some basis and then applying Gram–Schmidt. But here we are pretty lucky:
We know the basis $((0,-1,0,1)^T, (-1,0,1,0)^T)$ of $E_1$. Because both basis vectors are already orthogonal to each other we only need to normalize them. So we get $b_1 = \frac{1}{\sqrt{2}}(0,-1,0,1)^T$ and $b_2 = \frac{1}{\sqrt{2}}(-1,0,1,0)^T$.
In the case of $E_{-1}$ and $E_3$ we are even luckier, as they are both one-dimensional. So here too we only need to normalize and thus get $b_3 = \frac{1}{2} (-1,1,-1,1)^T$ and $b_4 = \frac{1}{2}(1,1,1,1)^T$.
Putting these together we have now found a basis $(b_1, b_2, b_3, b_4)$ of $\mathbb{R}^4$ given by
$$
b_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ -1 \\ 0 \\ 1 \end{pmatrix}, \;
b_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} -1 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \;
b_3 = \frac{1}{2} \begin{pmatrix} -1 \\ 1 \\ -1 \\ 1 \end{pmatrix}, \;
b_4 = \frac{1}{2} \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix},
$$
which is orthonormal and consists of eigenvectors of $\hat{L}$. The corresponding $2 \times 2$ matrices are
\begin{align*}
B_1 &= \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & -1 \\ 0 & 1 \end{pmatrix}, &
B_2 &= \frac{1}{\sqrt{2}} \begin{pmatrix} -1 & 0 \\ 1 & 0 \end{pmatrix}, \\
B_3 &= \frac{1}{2} \begin{pmatrix} -1 & 1 \\ -1 & 1 \end{pmatrix}, &
B_4 &= \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}.
\end{align*}
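With $P$ having $b_1, \dots, b_4$ as columns, one can verify numerically (a sketch) that $P$ is orthogonal and diagonalizes $A$:

```python
import numpy as np

A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]], dtype=float)

s = 1 / np.sqrt(2)
P = np.column_stack([
    s * np.array([0., -1., 0., 1.]),     # b1
    s * np.array([-1., 0., 1., 0.]),     # b2
    0.5 * np.array([-1., 1., -1., 1.]),  # b3
    0.5 * np.array([1., 1., 1., 1.]),    # b4
])

orthonormal = np.allclose(P.T @ P, np.eye(4))
D = P.T @ A @ P   # should be diag(1, 1, -1, 3)
```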
Best Answer
Let $\mathbf{x} = (x_{1}, x_{2}, x_{3})^T$ be the coordinates of a point in the $e$-basis and let $\mathbf{y} = (y_{1}, y_{2}, y_{3})^T$ be the coordinates of the same point in the $f$-basis.
It is the same point, so we require the following condition. $$ x_{1} \mathbf{e}_{1} + x_{2} \mathbf{e}_{2} + x_{3} \mathbf{e}_{3} = y_{1} \mathbf{f}_{1} + y_{2} \mathbf{f}_{2} + y_{3} \mathbf{f}_{3} $$
The question gives the way of writing the $f$-basis vectors in terms of the $e$-basis vectors: $$ \begin{aligned} \mathbf{f}_1 &= \mathbf{e}_1 + \mathbf{e}_2 \\ \mathbf{f}_2 &= \mathbf{e}_2 \\ \mathbf{f}_3 &= \mathbf{e}_1 - \mathbf{e}_3 \end{aligned} $$
We can substitute these formulas into the equation for the coordinates above:
$$ x_{1} \mathbf{e}_{1} + x_{2} \mathbf{e}_{2} + x_{3} \mathbf{e}_{3} = y_{1} (\mathbf{e}_{1} + \mathbf{e}_{2}) + y_{2} \mathbf{e}_{2} + y_{3} (\mathbf{e}_{1} - \mathbf{e}_{3}) $$
$$ x_{1} \mathbf{e}_{1} + x_{2} \mathbf{e}_{2} + x_{3} \mathbf{e}_{3} = y_{1} \mathbf{e}_{1} + y_{1} \mathbf{e}_{2} + y_{2} \mathbf{e}_{2} + y_{3} \mathbf{e}_{1} - y_{3} \mathbf{e}_{3} $$
$$ x_{1} \mathbf{e}_{1} + x_{2} \mathbf{e}_{2} + x_{3} \mathbf{e}_{3} = (y_{1} + y_{3}) \mathbf{e}_{1} + (y_{1} + y_{2} ) \mathbf{e}_{2} - y_{3} \mathbf{e}_{3} $$
Now $\mathbf{e}_1$, $\mathbf{e}_2$ and $\mathbf{e}_3$ are three linearly independent vectors so that the components of each vector can be equated on both sides of the above. I.e. we can write:
$$ \begin{aligned} x_1 &= y_1 + y_3 \\ x_2 & = y_1 + y_2 \\ x_3 &= -y_3 \end{aligned} $$
The following is exactly the same as the above, with slightly different spacing: $$ \begin{aligned} x_1 &= y_1 & &+y_3 \\ x_2 & = y_1 &+ y_2 & \\ x_3 &= & &-y_3 \end{aligned} $$
Writing the equations that relate the coordinates in this way, we can see how the set can be written as a single matrix equation:
$$ \begin{pmatrix} x_{1} \\ x_{2} \\ x_{3} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} y_{1} \\ y_{2} \\ y_{3} \end{pmatrix} \Rightarrow \mathbf{x} = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix} \mathbf{y}$$
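As a sanity check (a numpy sketch): the columns of $T$ are the $e$-coordinates of $\mathbf{f}_1, \mathbf{f}_2, \mathbf{f}_3$, so applying $T$ to the $f$-coordinates of each basis vector recovers its expression in the $e$-basis.

```python
import numpy as np

T = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 0, -1]], dtype=float)

# In the f-basis, f_1 has coordinates (1, 0, 0); in the e-basis it is e1 + e2.
f1_in_e = T @ np.array([1., 0., 0.])   # expect (1, 1, 0)
# Likewise f_3 = e1 - e3:
f3_in_e = T @ np.array([0., 0., 1.])   # expect (1, 0, -1)
```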
This shows how we can use a matrix to convert coordinates in the $f$-basis to coordinates in the $e$-basis, i.e. $\mathbf{x} = T \mathbf{y}$, i.e. it represents the matrix $T$ in the formula
$$ A' = T^{-1} A T $$
where $A$ is the matrix of the transformation applied to coordinates in the $e$-basis and $A'$ is its matrix in the $f$-basis; this formula is derived at the end of this answer.
Having found $T$, we can find its inverse (by hand or with some software): $$ T^{-1} = \begin{pmatrix} 1 & 0 & 1 \\ -1 & 1 & -1 \\ 0 & 0 & -1 \end{pmatrix} $$
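The inverse can be verified numerically (a quick check rather than a derivation):

```python
import numpy as np

T = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 0, -1]], dtype=float)

# Compute the inverse and confirm it really inverts T on both sides
T_inv = np.linalg.inv(T)
identity_ok = (np.allclose(T_inv @ T, np.eye(3))
               and np.allclose(T @ T_inv, np.eye(3)))
```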
and finally, we can calculate $A'$
$$ A' = \begin{pmatrix} -3& -2& -2 \\ 3& 3& -1 \\ -7& -1& -6 \end{pmatrix} $$ which is the matrix of the transformation that is applied to coordinates in the $f$-basis.
This next part goes into how the formula relating the two transformation matrices in the different bases is derived.
Write $\mathbf{u}$ for the result of applying $A$ to the $e$-basis coordinates $\mathbf{x}$, and write $\mathbf{v}$ for the result of applying $A'$ to the $f$-basis coordinates $\mathbf{y}$:
$$ \begin{aligned} \mathbf{u} &= A \mathbf{x} \\ \mathbf{v} &= A' \mathbf{y} \\ \end{aligned} $$
The vectors $\mathbf{x}$ and $\mathbf{y}$ correspond to the same point in the two different bases and so do the pair of vectors $\mathbf{u}$ and $\mathbf{v}$. In other words they can be written: $$ \begin{aligned} \mathbf{x} &= T \mathbf{y} \\ \mathbf{u} &= T \mathbf{v} \\ \end{aligned} $$
This means we can write the following $$ \begin{aligned} \mathbf{u} &= T \mathbf{v} \\ A \mathbf{x} &= T \mathbf{v} \\ A \mathbf{x} &= T A' \mathbf{y} \\ A T\mathbf{y} &= T A' \mathbf{y} \\ T^{-1} A T\mathbf{y} &= A' \mathbf{y} \\ \end{aligned} $$ As this works for all $\mathbf{y}$ we can conclude that $T^{-1} A T = A' $.
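The chain of equalities can be illustrated with a throwaway numerical example (the matrix `A_ex` below is made up for this sketch; it is not the $A$ from the original question):

```python
import numpy as np

T = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 0, -1]], dtype=float)

# Hypothetical transformation matrix in the e-basis (made up for illustration)
A_ex = np.array([[2, 0, 1],
                 [1, 3, 0],
                 [0, 1, 1]], dtype=float)

# Its matrix in the f-basis, via the similarity formula A' = T^{-1} A T
A_prime = np.linalg.inv(T) @ A_ex @ T

# Same point in both bases, transformed in each basis, still corresponds
y = np.array([1., 2., 3.])
x = T @ y          # x = T y
u = A_ex @ x       # transform in the e-basis
v = A_prime @ y    # transform in the f-basis
consistent = np.allclose(u, T @ v)   # u = T v, as the derivation claims
```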