I haven’t done this in quite some time, so this solution is probably unnecessarily complicated:
We identify $\mathbb{R}^{2 \times 2}$ with $\mathbb{R}^4$ via
$$
\mathbb{R}^{2 \times 2} \to \mathbb{R}^4, \,
\begin{pmatrix}
x & y \\
z & t
\end{pmatrix}
\mapsto
(x,y,z,t)^T.
$$
(So the “default basis” you used corresponds to the standard basis $(e_1, e_2, e_3, e_4)$ of $\mathbb{R}^4$.) If we understand $L$ as a linear map $\hat{L} \colon \mathbb{R}^4 \to \mathbb{R}^4$ then $\hat{L}$ is (with respect to the standard basis on both sides) given by the matrix
$$
A =
\begin{pmatrix}
1 & 1 & 0 & 1 \\
1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 \\
1 & 0 & 1 & 1
\end{pmatrix}.
$$
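(As a quick sanity check, not part of the argument: one can enter $A$ in Python and confirm the symmetry we will rely on below; numpy is assumed to be available.)

```python
import numpy as np

# Matrix of L-hat with respect to the standard basis of R^4
A = np.array([
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
])

# A equals its transpose, so A is symmetric
assert (A == A.T).all()
```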
Also notice that the inner product on $\mathbb{R}^{2 \times 2}$ corresponds to the standard scalar product on $\mathbb{R}^4$ because
$$
\left\langle
\begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{pmatrix},
\begin{pmatrix}
b_{11} & b_{12} \\
b_{21} & b_{22}
\end{pmatrix}
\right\rangle
= a_{11} b_{11} + a_{12} b_{12} + a_{21} b_{21} + a_{22} b_{22}.
$$
(This also justifies calling it the default inner product.) So finding an orthonormal basis of $\mathbb{R}^{2 \times 2}$ with respect to which $L$ is diagonal is the same as finding an orthonormal basis of $\mathbb{R}^4$ with respect to which $\hat{L}$ is represented by a diagonal matrix.
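(If you want to double-check this correspondence numerically, here is a small sketch; the two matrices are arbitrary examples, not taken from the problem.)

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(-5, 6, (2, 2))
Y = rng.integers(-5, 6, (2, 2))

# The entrywise inner product on 2x2 matrices ...
frobenius = np.sum(X * Y)
# ... equals the dot product of the row-major flattenings (x, y, z, t)
assert frobenius == X.flatten() @ Y.flatten()
```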
There are now different ways to solve this problem. We will first calculate the eigenspaces of $\hat{L}$; because $A$ is symmetric we know that $\hat{L}$ is diagonalizable. Then we will use the following fact:
Proposition: Let $S \in \mathbb{R}^{n \times n}$ be symmetric and let $x, y \in \mathbb{R}^n$ be eigenvectors of $S$ for eigenvalues $\lambda \neq \mu$. Then $x$ and $y$ are orthogonal.
Proof: Notice that
\begin{align*}
\lambda \langle x,y \rangle
&= \langle \lambda x, y \rangle
= \langle Sx, y \rangle
= (Sx)^T y
= x^T S^T y
= x^T S y \\
&= \langle x, S y \rangle
= \langle x, \mu y \rangle
= \mu \langle x, y \rangle.
\end{align*}
Because $\lambda \neq \mu$ it follows that $\langle x,y \rangle = 0$.
So the eigenspaces of different eigenvalues are orthogonal to each other. Therefore we can compute an orthonormal basis for each eigenspace and then put them together to get one of $\mathbb{R}^4$; each basis vector will then in particular be an eigenvector of $\hat{L}$.
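(A quick numerical illustration of the proposition, using a random symmetric matrix:)

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
S = M + M.T  # symmetrize

# eigh is made for symmetric matrices; eigenvectors are the columns of Q
eigenvalues, Q = np.linalg.eigh(S)

# The eigenvectors form an orthonormal system, i.e. Q^T Q = I
assert np.allclose(Q.T @ Q, np.eye(4))
```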
By some lengthy calculation it can be shown that the characteristic polynomial of $A$ is given by
$$
\chi_A(t) = t^4 - 4 t^3 + 2 t^2 + 4t - 3.
$$
It is easy to guess the roots $1$ and $-1$, so we can factor $\chi_A$ and get
$$
\chi_A(t) = (t-1)^2 (t+1) (t-3).
$$
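(The lengthy calculation can be delegated to sympy if you only want to check the result:)

```python
import sympy as sp

t = sp.symbols("t")
A = sp.Matrix([
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
])

chi = A.charpoly(t).as_expr()
print(chi)             # t**4 - 4*t**3 + 2*t**2 + 4*t - 3
print(sp.factor(chi))  # (t - 3)*(t - 1)**2*(t + 1)
```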
The eigenspaces can now be calculated as usual, and we find that
$$
E_1 = \langle (0,-1,0,1)^T, (-1,0,1,0)^T \rangle, \;
E_{-1} = \langle (-1,1,-1,1)^T \rangle, \;
E_3 = \langle (1,1,1,1)^T \rangle,
$$
where $E_\lambda$ denotes the eigenspace with respect to the eigenvalue $\lambda$.
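(sympy confirms the eigenspaces as well; the basis vectors it returns may differ by sign or scaling from the ones above, but they span the same spaces.)

```python
import sympy as sp

A = sp.Matrix([
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
])

# Each entry is (eigenvalue, algebraic multiplicity, basis of the eigenspace)
for eigenvalue, multiplicity, basis in A.eigenvects():
    print(eigenvalue, multiplicity, [list(v) for v in basis])
```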
Next we need to find an orthonormal basis for each eigenspace. We can always do this by picking some basis and then using Gram–Schmidt. But here we are pretty lucky:
We know the basis $((0,-1,0,1)^T, (-1,0,1,0)^T)$ of $E_1$. Because both basis vectors are already orthogonal to each other we only need to normalize them. So we get $b_1 = \frac{1}{\sqrt{2}}(0,-1,0,1)^T$ and $b_2 = \frac{1}{\sqrt{2}}(-1,0,1,0)^T$.
In the case of $E_{-1}$ and $E_3$ we are even luckier, as they are both one-dimensional. So here too we only need to normalize and thus get $b_3 = \frac{1}{2} (-1,1,-1,1)^T$ and $b_4 = \frac{1}{2}(1,1,1,1)^T$.
Putting these together we have now found a basis $(b_1, b_2, b_3, b_4)$ of $\mathbb{R}^4$ given by
$$
b_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ -1 \\ 0 \\ 1 \end{pmatrix}, \;
b_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} -1 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \;
b_3 = \frac{1}{2} \begin{pmatrix} -1 \\ 1 \\ -1 \\ 1 \end{pmatrix}, \;
b_4 = \frac{1}{2} \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix},
$$
which is orthonormal and consists of eigenvectors of $\hat{L}$. The corresponding $2 \times 2$ matrices are
\begin{align*}
B_1 &= \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & -1 \\ 0 & 1 \end{pmatrix}, &
B_2 &= \frac{1}{\sqrt{2}} \begin{pmatrix} -1 & 0 \\ 1 & 0 \end{pmatrix}, \\
B_3 &= \frac{1}{2} \begin{pmatrix} -1 & 1 \\ -1 & 1 \end{pmatrix}, &
B_4 &= \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}.
\end{align*}
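(As a final check, one can verify numerically that $(b_1, b_2, b_3, b_4)$ is orthonormal and diagonalizes $A$ with the eigenvalues $1, 1, -1, 3$ on the diagonal:)

```python
import numpy as np

A = np.array([
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
])

s = 1 / np.sqrt(2)
# The columns are b_1, b_2, b_3, b_4
Q = np.array([
    [ 0, -s, -0.5, 0.5],
    [-s,  0,  0.5, 0.5],
    [ 0,  s, -0.5, 0.5],
    [ s,  0,  0.5, 0.5],
])

assert np.allclose(Q.T @ Q, np.eye(4))                   # orthonormal
assert np.allclose(Q.T @ A @ Q, np.diag([1, 1, -1, 3]))  # diagonal form
```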
Use the dual basis or (equivalently) the dot product to identify $(\Bbb{R}^4)^*$ with $\Bbb{R}^4$. Then the annihilator of a subspace is its usual orthogonal complement. Thus we can find a basis for the orthogonal complement by Gram–Schmidt. Anyway, that gives us an algorithm, but it's a bit tedious, so let's take a different route.
If you row reduce further you get
$$
\newcommand\bmat{\begin{pmatrix}}\newcommand\emat{\end{pmatrix}}
\bmat
1 & 0 & -1 & -2 \\
0 & 1 & 2 & 3 \\
\emat
$$
This means that to get an orthogonal vector, we can choose the last two coordinates freely and pick the first two such that we get something orthogonal. I.e., if our new vector is $\bmat a & b & c & d\emat$, then we need $a = c+2d$ and $b=-2c -3d$. Similarly, since the fourth vector doesn't need to be orthogonal to the third, merely linearly independent from it and orthogonal to the first two, we can also choose its last two coordinates freely (as long as they're linearly independent from the last two coordinates of the third).
What this boils down to is that we can choose the last two coordinates of the third vector to be $(1,0)$ and the last two coordinates of the fourth to be $(0,1)$, giving the final matrix
$$
\bmat
1 & 0 & -1 & -2 \\
0 & 1 & 2 & 3 \\
1 & -2 & 1 & 0 \\
2 & -3 & 0 & 1 \\
\emat
$$
Thus a basis for the annihilator of the subspace is $\{f_1-2f_2+f_3,2f_1-3f_2+f_4\}$ given in terms of the dual basis.
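(A quick numerical check that these two vectors are orthogonal to the subspace, i.e. that under the identification above they really span the annihilator:)

```python
import numpy as np

# Row-reduced basis of the subspace
rows = np.array([
    [1, 0, -1, -2],
    [0, 1,  2,  3],
])
# Claimed basis of the annihilator / orthogonal complement
ann = np.array([
    [1, -2, 1, 0],
    [2, -3, 0, 1],
])

# All pairwise dot products between the two families vanish
assert np.allclose(rows @ ann.T, 0)
```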
The range of $g$ is the span of $e_1, e_2, e_4$, hence its kernel is one-dimensional. Find a nonzero element $v$ of the kernel; it will generate it.
Now, $g\circ f\,(x)=0$ iff $f(x)\in\ker g$, so the range (= column space) of $f$ must be contained in ${\rm span}(v)$, and you can obtain a basis $(f_i)$ of $A$ by taking the standard matrix of $f_i$ to have $v$ in the $i$th column and $0$ in every other column.
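(Since the concrete matrix of $g$ is not reproduced here, the following sketch uses a made-up $g$ whose range is the span of $e_1, e_2, e_4$, just to illustrate the recipe; with the actual $g$ only the matrix entries change.)

```python
import sympy as sp

# Hypothetical stand-in for g: rank 3, range spanned by e_1, e_2, e_4
G = sp.Matrix([
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 1],
])

# The kernel is one-dimensional; v generates it
(v,) = G.nullspace()
print(list(v))  # for this G: [-1, -1, 1, 0]

# Basis (f_1, ..., f_4) of A: put v in the i-th column, zeros elsewhere
for i in range(4):
    F = sp.zeros(4, 4)
    F[:, i] = v
    assert G * F == sp.zeros(4, 4)  # indeed g o f_i = 0
```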