My previous answer to this question was the "quick and easy" one, merely showing the existence of a matrix $A$ with
$A^2 = -4I, \tag 1$
as per our OP user485215's request, without going into the deeper theory, other than to note that the matrix
$J = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \tag 2$
satisfies
$J^2 = -I, \tag 3$
and may be used to build $A$ as follows:
$A = 2J \tag 4$
so in a sense $J$ corresponds to the imaginary unit $i$ (or $-i$, in the light of zwim's comments on my other answer). But there are many other solutions for $J$, and hence for $A$; my intention here, in the spirit of MMM's answer, is to present all such matrices $J$ parametrically. This is in fact a favorite topic of mine; I have been thinking about it and related matters for a long time, so I am glad to have a chance to post my remarks here.
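Before turning to the general theory, a quick numerical sanity check (a Python sketch, not part of the original argument) confirms that the $J$ of (2) satisfies $J^2 = -I$, and hence that $A = 2J$ satisfies $A^2 = -4I$:

```python
# Verify J^2 = -I for the J of (2), and A^2 = -4I for A = 2J of (4).
import numpy as np

J = np.array([[0, -1],
              [1,  0]])
A = 2 * J

assert (J @ J == -np.eye(2)).all()       # J^2 = -I, equation (3)
assert (A @ A == -4 * np.eye(2)).all()   # A^2 = -4I, equation (1)
```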
Suppose we look for the most general $J$ satisfying (3); we may write
$J = \begin{bmatrix} j_{11} & j_{12} \\ j_{21} & j_{22} \end{bmatrix}, \tag 5$
with the $j_{kl} \in \Bbb R$, so that
$J^2 = \begin{bmatrix} j_{11} & j_{12} \\ j_{21} & j_{22} \end{bmatrix} \begin{bmatrix} j_{11} & j_{12} \\ j_{21} & j_{22} \end{bmatrix} = \begin{bmatrix} j_{11}^2 + j_{12}j_{21} & j_{11}j_{12} + j_{12} j_{22} \\ j_{21}j_{11} + j_{22}j_{21} & j_{21}j_{12} + j_{22}^2\end{bmatrix}$
$= \begin{bmatrix} j_{11}^2 + j_{12}j_{21} & (j_{11} + j_{22})j_{12} \\ (j_{11} + j_{22})j_{21} & j_{21}j_{12} + j_{22}^2\end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}, \tag 6$
from which we extract the four equations for the $j_{kl}$:
$j_{11}^2 + j_{12}j_{21} = -1, \tag 7$
$ (j_{11} + j_{22})j_{12} = 0, \tag 8$
$(j_{11} + j_{22})j_{21} = 0, \tag 9$
$j_{21}j_{12} + j_{22}^2 = -1. \tag{10}$
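The entrywise expansion of $J^2$ in (6), and hence the system (7)-(10), can be checked symbolically; the following sketch uses sympy, which the original answer of course does not rely on:

```python
# Symbolic check: expanding J^2 entrywise reproduces the left-hand
# sides of equations (7)-(10).
import sympy as sp

j11, j12, j21, j22 = sp.symbols('j11 j12 j21 j22', real=True)
J = sp.Matrix([[j11, j12], [j21, j22]])
J2 = (J * J).expand()

assert J2[0, 0] == j11**2 + j12*j21                # (7)
assert sp.factor(J2[0, 1]) == (j11 + j22)*j12      # (8)
assert sp.factor(J2[1, 0]) == (j11 + j22)*j21      # (9)
assert J2[1, 1] == j12*j21 + j22**2                # (10)
```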
(7)-(10) suggest that $\text{Tr} J = j_{11} + j_{22}$ may provide a useful grip on the succeeding analysis. If
$j_{11} + j_{22} \ne 0, \tag{11}$
(8) and (9) imply
$j_{12} =j_{21} = 0, \tag{12}$
and then (7) and (10) become
$j_{11}^2 = j_{22}^2 = -1, \tag{13}$
clearly impossible for real $j_{11}$, $j_{22}$. Ruling out (11), we must have
$j_{11}+j_{22} = 0, \tag{14}$
which suggests introducing a parameter $\alpha$ with
$j_{11} = \alpha = -j_{22}; \tag{15}$
then (7) and (10) each become
$\alpha^2 + j_{12}j_{21} = -1, \tag{16}$
so if we set
$j_{12} = \beta, \tag{17}$
we see that
$j_{21} = \dfrac{-1 - \alpha^2}{\beta} = -\dfrac{1 + \alpha^2}{\beta}; \tag{18}$
now we may present the family of all $J$ parametrically:
$J = \begin{bmatrix} \alpha & \beta \\ -\dfrac{1 + \alpha^2}{\beta} & -\alpha \end{bmatrix}; \tag{19}$
here $\alpha \in \Bbb R$ may be taken arbitrarily, but we must have $0 \ne \beta \in \Bbb R$. It is easily verified that any $J$ defined by (19) satisfies (3); thus we have presented all such $J$ in parametric form in (19).
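The claim that every matrix of the form (19) squares to $-I$ is easy to spot-check numerically; here is a minimal Python sketch over a few parameter pairs $(\alpha, \beta)$ with $\beta \ne 0$:

```python
# Spot-check: the parametric J of (19) satisfies J^2 = -I for any
# real alpha and nonzero real beta.
def J(alpha, beta):
    return [[alpha, beta],
            [-(1 + alpha**2) / beta, -alpha]]

def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

for a, b in [(0.0, 1.0), (0.5, 0.5), (1.0, -0.5), (5.0, 2.0)]:
    M = matmul(J(a, b), J(a, b))
    assert abs(M[0][0] + 1) < 1e-12 and abs(M[1][1] + 1) < 1e-12
    assert abs(M[0][1]) < 1e-12 and abs(M[1][0]) < 1e-12
```

The pair $(\alpha, \beta) = (0, 1)$ recovers the $J$ of (2).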
It follows that every $A$ satisfying (1) is of the form
$A = 2J, \tag{20}$
with $J$ as in (19). For example, following up on MMM's example
$A = \begin{bmatrix} 1 & 1 \\ -5 & -1 \end{bmatrix}, \tag{21}$
we take
$J = \begin{bmatrix} \dfrac{1}{2} & \dfrac{1}{2} \\ -\dfrac{5}{2} & -\dfrac{1}{2} \end{bmatrix}, \tag{22}$
which corresponds to $\alpha = \beta = 1/2$. Widawensen's example
$A = \begin{bmatrix} 2 & -1 \\ 8 & -2 \end{bmatrix} \tag{23}$
yields
$J = \begin{bmatrix} 1 & -\dfrac{1}{2} \\ 4 & -1 \end{bmatrix}, \tag{24}$
with $\alpha = 1$ and $\beta = -1/2$; we can obviously go a long way in this direction.
Finally, it is engaging to observe, in the light of Widawensen's remark on matrices $A$ with prime entries, that there are many $J$ with prime entries (up to sign) as well: we simply choose a prime $\alpha$ such that $\alpha^2 + 1$ is the product of precisely two primes, $\alpha^2 + 1 = pq$; then we set $\beta = p$, so that $-(1 + \alpha^2)/\beta = -q$. For instance, with $\alpha = 5$ we may take $\beta = 2$ and find
$J = \begin{bmatrix} 5 & 2 \\ -13 & -5 \end{bmatrix}; \tag{25}$
the list goes on, but whether it has any number-theoretic use or significance, I do not know . . .
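For the curious, a small search script (an illustrative Python sketch, not part of the original answer) finds primes $\alpha$ for which $\alpha^2 + 1 = pq$ is a product of exactly two primes, each such triple yielding a prime-entry (up to sign) matrix $J$ of the kind just described:

```python
# Search for primes alpha with alpha^2 + 1 = p*q a product of exactly
# two primes; then J = [[alpha, p], [-q, -alpha]] satisfies J^2 = -I
# and has prime entries up to sign.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def two_prime_factorization(n):
    # Return (p, q) with n = p*q and p <= q both prime, else None.
    p = 2
    while p * p <= n:
        if n % p == 0:
            q = n // p
            return (p, q) if is_prime(q) else None
        p += 1
    return None  # n is prime (or < 4), not a product of two primes

examples = []
for alpha in range(2, 100):
    if is_prime(alpha):
        fac = two_prime_factorization(alpha**2 + 1)
        if fac:
            examples.append((alpha,) + fac)

# alpha = 5 gives 26 = 2 * 13, the example worked out above
assert (5, 2, 13) in examples
```

Already $\alpha = 3$ qualifies, since $3^2 + 1 = 10 = 2 \cdot 5$.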
Best Answer
I think I have understood that your (non-standard!) notation $I_2$ means (in the context of Cramer's rule) the substitution of the second column of $I$ (whence the subscript $2$) by the vector $x$:
$$\begin{vmatrix}1&x_1&0\\0&x_2&0\\0&x_3&1\end{vmatrix}$$
which in fact is $x_2$.
Remark: Has your lecturer advised you to use the notation $I_2$?