I always thought that if the determinant of a matrix $A$ is $0$, then $A$ has no inverse $A^{-1}$, until I came across an exercise in Contemporary Abstract Algebra by Gallian. It asks me to prove that the set of $2\times2$ matrices of the form
$$\begin{bmatrix}
a&a\\
a&a\\
\end{bmatrix}\,,$$
where $a \neq 0$ and $a \in \mathbb R$, is a group under matrix multiplication. Every matrix in this set has determinant $0$, yet each one has an inverse, which is nothing but
$$\begin{bmatrix}
\frac{a}{2} & \frac{a}{2} \\
\frac{a}{2} & \frac{a}{2} \\
\end{bmatrix}\,.$$
How is this possible? What lets matrices of this type escape the determinant test for invertibility? What's the logic behind it?
Best Answer
$\newcommand{\R}{\mathbb{R}}$To pick up from the comment of @NaN, look for an element
$$ E = \begin{bmatrix} e & e\\ e & e\\ \end{bmatrix} $$
such that for each $a \ne 0$ we have
$$ \begin{bmatrix} a & a\\ a & a\\ \end{bmatrix} \cdot E = \begin{bmatrix} a & a\\ a & a\\ \end{bmatrix}. $$
And then you'll find that the inverse is not quite the one you wrote...
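Writing $M_a$ for the matrix whose four entries all equal $a$ (a shorthand not used in the question, but convenient here), the single computation that drives everything is the product rule:
$$ M_a \cdot M_b = \begin{bmatrix} a & a\\ a & a\\ \end{bmatrix} \begin{bmatrix} b & b\\ b & b\\ \end{bmatrix} = \begin{bmatrix} 2ab & 2ab\\ 2ab & 2ab\\ \end{bmatrix} = M_{2ab}. $$
So the set is closed under multiplication, and every question about identities and inverses reduces to arithmetic in the single parameter $a$.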
Hint 3 below explains the logic behind this exercise.
Hint 1
The identity element of this group is not the usual identity matrix $I$. Solving $M_a \cdot E = M_a$, i.e. $2ae = a$, gives $e = \frac{1}{2}$, so $E = M_{1/2}$.
Hint 2
The inverse of $M_a$ must satisfy $M_a \cdot M_b = E = M_{1/2}$, i.e. $2ab = \frac{1}{2}$, so $b = \frac{1}{4a}$. The inverse of $M_a$ is $M_{1/(4a)}$, not $M_{a/2}$.
Hint 3
The theorem "if $\det A = 0$ then $A$ is not invertible" is about inverses with respect to the identity matrix $I$ of the full matrix ring. Your set does not even contain $I$; it forms a group whose identity is $E = M_{1/2}$, so the determinant criterion says nothing about inverses taken with respect to $E$.
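To make the logic of Hint 3 concrete (an observation going slightly beyond the hints themselves): the map $\varphi(M_a) = 2a$ is an isomorphism from this group onto $(\R\setminus\{0\}, \cdot)$, since
$$ \varphi(M_a \cdot M_b) = \varphi(M_{2ab}) = 4ab = (2a)(2b) = \varphi(M_a)\,\varphi(M_b), $$
and it sends the identity $E = M_{1/2}$ to $1$ and the inverse $M_{1/(4a)}$ to $\frac{1}{2a} = (2a)^{-1}$. The group is a perfectly ordinary copy of the nonzero reals under multiplication, sitting inside the matrix ring away from $I$.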