I like your idea of trying matrices over $\mathbb{Z}/2\mathbb{Z}$. However, you don't want to take all matrices, as then you have too many idempotents ($x$ such that $x^2 = x$).
For instance, take $a = E_{11} = \left[\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\right]$ and $b=E_{11}+E_{12} = \left[\begin{smallmatrix}1&1\\0&0\end{smallmatrix}\right]$. In general $E_{ij} \cdot E_{jk} = E_{ik}$, while $E_{ij} \cdot E_{k\ell} = 0$ if $j\neq k$. Hence $ab=b$ but $ba=a$; of course $a^2 = a$ and $b^2=b$, so $(ab)^2 = b \neq a = (ba)^2$.
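If you want to see this failure concretely, here is a quick check (a sketch using numpy; the `% 2` reductions stand in for working over $\mathbb{Z}/2\mathbb{Z}$):

```python
import numpy as np

# Matrix units over Z/2Z: a = E11, b = E11 + E12.
a = np.array([[1, 0], [0, 0]])
b = np.array([[1, 1], [0, 0]])

ab = (a @ b) % 2   # equals b
ba = (b @ a) % 2   # equals a

print((ab == b).all(), (ba == a).all())        # True True
print(((ab @ ab) % 2 == (ba @ ba) % 2).all())  # False: (ab)^2 = b but (ba)^2 = a
```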
So the hint is to take a subring of the matrix ring that is not commutative but avoids having too many idempotents. Idempotents are like partial 1s and partial 0s; what we want is for each element to be either “1” or “0”, not both. In more standard language, to avoid idempotents we make sure every element is either a unit (an $x$ such that $x^{-1}$ exists) or nilpotent (an $x$ such that $x^n=0$ for some $n$).
The very first idea along these lines doesn't quite work, but it is a good one: take the span of a basis with one element a unit (the identity matrix $E_{11}+E_{22}$) and the other element nilpotent ($x=E_{12}$). It is not too hard to show that this vector space is closed under multiplication, so we do get a ring, and in this ring $(ab)^2 = (ba)^2$ for all $a,b$. However, the ring consists only of $0,1,x,1+x$, and it is not hard to check that it is commutative; it is $\mathbb{Z}[x]/(2,x^2)$.
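As a sanity check, the four elements $0,1,x,1+x$ can be brute-forced directly (a sketch with numpy; the names `one`, `x`, `elems` are mine):

```python
import numpy as np
from itertools import product

one = np.eye(2, dtype=int)          # the identity E11 + E22
x = np.array([[0, 1], [0, 0]])      # E12, nilpotent since x @ x = 0

# The four elements 0, 1, x, 1+x, with entries mod 2.
elems = [(c0 * one + c1 * x) % 2 for c0, c1 in product(range(2), repeat=2)]

# Closed under multiplication: every product lands back in the list.
closed = all(any(((p @ q) % 2 == e).all() for e in elems)
             for p in elems for q in elems)
# And commutative, so it cannot answer the original question.
commutative = all(((p @ q) % 2 == (q @ p) % 2).all()
                  for p in elems for q in elems)
print(closed, commutative)  # True True
```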
You can fix the commutativity problem without too much trouble though:
Just use larger matrices. Let $R$ be the vector space over the field $\mathbb{Z}/2\mathbb{Z}$ with basis $1=E_{11}+E_{22}+E_{33}$, $x=E_{12}$, $y=E_{23}$, and $z=E_{13}$. Since $xy=z$ but $$xx=yy=zz=yx=xz=zx=yz=zy=0,$$ multiplication is very easy. For $$A=\begin{bmatrix} a & b & d \\ 0 & a & c \\ 0 & 0 & a \end{bmatrix}, \qquad B = \begin{bmatrix} e & f & h \\ 0 & e & g \\ 0 & 0 & e \end{bmatrix}$$ we get $$(AB)^2 - (BA)^2 = \begin{bmatrix} 0 & 0 & 2ae(bg-cf) \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$ which is $0$ as long as $2=0$.
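Since $R$ has only $16$ elements, the claim can be verified exhaustively (a sketch with numpy; the helper `sq` and the variable names are mine):

```python
import numpy as np
from itertools import product

# Basis of R inside the 3x3 upper triangular matrices over Z/2Z.
one = np.eye(3, dtype=int)
x = np.zeros((3, 3), dtype=int); x[0, 1] = 1   # E12
y = np.zeros((3, 3), dtype=int); y[1, 2] = 1   # E23
z = np.zeros((3, 3), dtype=int); z[0, 2] = 1   # E13

# All 16 elements a*1 + b*x + c*y + d*z with coefficients in {0, 1}.
elems = [(a*one + b*x + c*y + d*z) % 2
         for a, b, c, d in product(range(2), repeat=4)]

def sq(m):
    return (m @ m) % 2

# (AB)^2 == (BA)^2 for every pair ...
assert all((sq((A @ B) % 2) == sq((B @ A) % 2)).all()
           for A in elems for B in elems)
# ... yet R is not commutative: xy = z but yx = 0.
assert ((x @ y) % 2 == z).all() and not ((y @ x) % 2 == z).all()
print("verified")
```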
Any standard example of elements such that $ba=1$ and $ab\neq 0,1$ is a counterexample for you.
The first equation, $ba=1$, says that $a$ is left invertible and $b$ is right invertible. From it you can deduce that $(ab)^2=a(ba)b=ab$, so $ab$ is a nonzero idempotent. A nontrivial idempotent in a ring with identity (meaning one other than $0$ and $1$) is never left or right invertible, because it is both a left and a right zero divisor: $(1-ab)ab=ab(1-ab)=0$.
Examples of such rings with $a$ and $b$ like that are scattered throughout the site, but they are a little hard to search for. I found one at a related question, although you don't really need to say "bounded linear operators on $\ell^2$": you can just say "linear transformations $V\to V$, where $V$ is a vector space of countably infinite dimension." For a fixed basis, the "right shift" $a$ and the "left shift" $b$ on the basis elements define linear transformations such that $ba=1$ and $ab\neq 0,1$.
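The shift maps can be sketched on finitely supported sequences, modeled here as Python tuples (`right_shift` and `left_shift` are my names for $a$ and $b$; $ba$ means "apply $a$, then $b$"):

```python
# Model vectors in V as finitely supported coordinate sequences (tuples).
# Both maps are linear in the coordinates, so they stand in for linear
# transformations V -> V.

def right_shift(v):   # a: (v0, v1, ...) -> (0, v0, v1, ...)
    return (0,) + tuple(v)

def left_shift(v):    # b: (v0, v1, ...) -> (v1, v2, ...)
    return tuple(v[1:])

v = (3, 1, 4, 1, 5)
assert left_shift(right_shift(v)) == v   # ba = 1: shift right, then left
assert right_shift(left_shift(v)) != v   # ab != 1: it kills the first coordinate
print(right_shift(left_shift(v)))        # (0, 1, 4, 1, 5)
```

Note that $ab$ is the projection that zeroes the first coordinate, which is exactly the nontrivial idempotent from the argument above.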
Best Answer
Hint $ $ The inverse can be slickly derived via geometric power series as below (straightforward algebra then proves that it is indeed an inverse).
$$\begin{eqnarray} \rm (1-ba)^{-1} &=&\rm 1+ \color{#0a0}b\color{#c00}a + \color{#0a0}b\color{#c00}{ab}\color{#c00}a + \color{#0a0}b\color{#0a0}{abab}\color{#c00}a +\,\cdots\\ &=&\rm 1+ \color{#0a0}b\:\! (1\:\! +\, \color{#c00}{ab}\ \ +\ \ \color{#0a0}{abab}\,\ +\,\cdots\,)\:\!\color{#c00}a\\ &=&\rm 1+ \color{#0a0}b\:\! (1\,-\,ab)^{-1}\color{#c00}a\end{eqnarray}\qquad\qquad$$
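The resulting identity $(1-ba)^{-1} = 1 + b\,(1-ab)^{-1}a$ holds whenever both inverses exist, so it is easy to sanity-check numerically (a sketch with random numpy matrices; the scaling factor is my choice, made so that $1-ab$ and $1-ba$ are comfortably invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Small random matrices; scaling by 0.3 keeps 1 - ab and 1 - ba
# well-conditioned for the inversions below.
a = 0.3 * rng.standard_normal((n, n))
b = 0.3 * rng.standard_normal((n, n))
I = np.eye(n)

lhs = np.linalg.inv(I - b @ a)
rhs = I + b @ np.linalg.inv(I - a @ b) @ a
print(np.allclose(lhs, rhs))  # True
```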
Halmos asks why this works in a famous expository article (excerpted below). When I was a student I became interested in this. It turns out that one can give good (and rigorous) explanations. It can be proved that all such rational identities are essentially consequences of geometric power series expansions. For references see this Mathoverflow question. See also Paul Cohn, A remark on the quasi-inverse of a product, Illinois J. Math, 2003. Cohn wrote this paper in reply to my question regarding his viewpoint on this topic.
[1] Halmos, P. R., Does mathematics have elements?, Math. Intelligencer 3 (1980/81), no. 4, 147–153.