I was studying inverse matrices, and I stumbled on the inverse of a 3×3 matrix $K$. It involved a division by the determinant (at least in the worked numerical example), and the text explicitly noted that a division by the determinant was involved. Then I looked at the inverse of a 2×2 matrix with variables as entries (the general form of a real 2×2 matrix), and it also involved a division by $ad-bc$ (the determinant). But why is that? Is it the consequence of some property of the matrix?
Why does the inverse of a matrix involve division by the determinant?
determinant · linear-algebra · matrices
Related Solutions
I think the best way to answer your question is this: it's a mnemonic.
This mnemonic lets you get your hands on a collection of mathematical objects called "exterior forms". From this perspective, it's not a "hack" but to explain exactly what it is essentially requires discussing dual spaces and things not appropriate for standard multivariable calculus classes. [Edit: The Mathoverflow post cited in the comments above is a great discussion on how one might try to give this a formal footing.]
Here's a way to view why/how the cross product works. Let $e_1=\langle 1,0,\dots,0 \rangle$, $e_2=\langle 0,1,\dots,0 \rangle$, $\dots$, $e_n = \langle 0,\dots,0,1\rangle$ [so in $\mathbb{R}^3$ we have $e_1=\vec{i}$, $e_2=\vec{j}$, and $e_3=\vec{k}$]. Next, consider vectors $\vec{a}_1 =\langle a_{11},a_{12},\dots,a_{1n} \rangle$, $\vec{a}_2 =\langle a_{21},a_{22},\dots,a_{2n} \rangle$, $\dots$, $\vec{a}_{n-1} =\langle a_{(n-1)1},a_{(n-1)2},\dots,a_{(n-1)n} \rangle$. Consider the $n \times n$ "matrix"
$$ A = \begin{bmatrix} e_1 & e_2 & \cdots & e_n \\ a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{(n-1)1} & a_{(n-1)2} & \cdots & a_{(n-1)n} \end{bmatrix} $$
The determinant of $A$ is a vector (the vectors $e_1$, $e_2$, $\dots$ will only be multiplied by subdeterminants made up entirely of scalars so we never need to worry about multiplying vectors). Also, by the way the dot product is defined, $\mathrm{det}(A) \cdot \vec{b}$ just results in replacing $e_1$, $e_2$, etc. with the components of $\vec{b}$ (this is easily seen from the cofactor expansion of the determinant along the first row). That is...
$$ \mathrm{det}(A) \cdot \vec{b} = \mathrm{det}\left( \begin{bmatrix} b_1 & b_2 & \cdots & b_n \\ a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{(n-1)1} & a_{(n-1)2} & \cdots & a_{(n-1)n} \end{bmatrix} \right) $$
Now recall that the determinant of a matrix is zero if it has a repeated row. So if we dot this "cross product vector" $\mathrm{det}(A)$ with any $\vec{a}_i$, we get zero. Thus $\mathrm{det}(A)$ is orthogonal to $\vec{a}_1$, $\dots$, $\vec{a}_{n-1}$. Hence we have a "cross product" for $\mathbb{R}^n$.
Not a hack. Just a clean way to rig up a function which takes in $n-1$ vectors and spits out a vector perpendicular to all of its inputs.
Edit: Some people say there is no cross product except in $\mathbb{R}^3$. This is true in a certain sense. If the purpose of the cross product is to give a vector perpendicular to two other vectors, then two dimensions of input must determine a one-dimensional output, so we need to be working in $2+1=3$ dimensional space. (There is also a binary cross product in $\mathbb{R}^7$, but that's a long story.) However, if you don't require your "cross product" to be a binary product, it works in $\mathbb{R}^n$ for any $n \geq 2$.
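The determinant construction above can be sanity-checked numerically. Below is a minimal Python sketch (the helper names `det`, `cross`, and `dot` are my own): each component of the "cross product" of $n-1$ vectors in $\mathbb{R}^n$ is the signed cofactor from expanding the symbolic determinant along the $e_i$ row.

```python
def det(m):
    """Determinant by Laplace expansion along the first row (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def cross(*vectors):
    """Generalized cross product: takes n-1 vectors in R^n, returns a vector
    orthogonal to all of them. Component i is the cofactor of e_i in the
    determinant with the e-row on top and the input vectors below."""
    n = len(vectors[0])
    assert len(vectors) == n - 1, "need exactly n-1 vectors in R^n"
    return [(-1) ** i * det([v[:i] + v[i + 1:] for v in vectors])
            for i in range(n)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))
```

For example, `cross([1, 0, 0], [0, 1, 0])` recovers $\vec{i} \times \vec{j} = \vec{k}$, and in $\mathbb{R}^4$ the same function takes three vectors and returns a vector orthogonal to all three, exactly as the repeated-row argument predicts.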
The product $A \cdot B$ is again a symmetric matrix. To summarize your question: given a symmetric matrix $B$ with zeros on the diagonal, is there a simple way to compute $\det(I - B)$?
Answer: There is none in general. For a $B$ with only small entries, a good first-order approximation is $$\det(I - B) \approx 1 - {\rm trace} (B).$$
However, note that $I - B$ has only $1$'s on the diagonal, so it should be relatively easy to reduce the problem to lower dimensions.
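A quick numeric check of this approximation (a sketch with an arbitrary small symmetric $B$ of my own choosing; since $B$ has zero diagonal, ${\rm trace}(B) = 0$ and the approximation is just $1$, so the deviation is of second order in the entries):

```python
eps = 0.01
# Symmetric B with zeros on the diagonal and small off-diagonal entries.
B = [[0.0,     eps, 2 * eps],
     [eps,     0.0,     eps],
     [2 * eps, eps,     0.0]]

n = len(B)
# Form M = I - B.
M = [[(1.0 if i == j else 0.0) - B[i][j] for j in range(n)] for i in range(n)]

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

exact = det(M)
approx = 1.0 - sum(B[i][i] for i in range(n))  # trace(B) = 0 here
print(exact, approx)  # exact deviates from 1 only at order eps**2
```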
Best Answer
Question: "Why the inverse of a matrix involves division by the determinant?"
Answer: We use the adjugate matrix and the determinant to prove existence of an inverse of a matrix as follows:
The "adjugate matrix" $ad(A)$ has the property that $ad(A)A=Aad(A)=det(A)I$ where $det(-): Mat(n,k) \rightarrow k$ is a map with $det(AB)=det(A)det(B)$. Here $Mat(n,k)$ is the set of $n\times n$-matrices with coefficients in $k$. $det(A)$ is the "determinant" of the matrix $A$ as defined in your linear algebra course.
Lemma: A square matrix $A$ has an inverse iff $det(A)\neq 0$.
Proof: If $det(A)\neq 0$ it follows $A^{-1}:=\frac{1}{det(A)}ad(A)$ is an inverse. Conversely assume there is a matrix $B$ with $AB=BA=I$. It follows $det(AB)=det(A)det(B)=1$ and hence $det(A) \neq 0$.
Hence the adjugate matrix and the determinant map imply the existence of an inverse of $A$: the matrix $A$ has a unique inverse $A^{-1}$ iff $det(A)\neq 0$.
Example: Let \begin{align*} A= \begin{pmatrix} a & b \\ c & d \end{pmatrix} \end{align*}
and define the adjugate matrix $ad(A)$ by
\begin{align*} ad(A)= \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} \end{align*}
It follows
\begin{align*} ad(A)A=Aad(A)= \begin{pmatrix} ad-bc & 0 \\ 0 & ad-bc \end{pmatrix} = det(A)\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \end{align*}
In general, for any $n\times n$-matrix $A$ there is a unique matrix $ad(A)$ with $ad(A)A=Aad(A)=det(A)I$; this result is proved in any serious linear algebra course. Hence the above proves the Lemma explicitly for any $2\times 2$-matrix.
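The formula $A^{-1}=\frac{1}{det(A)}ad(A)$ can be sketched for a general $n\times n$ matrix. This is only an illustration (the helper names `det`, `adjugate`, and `inverse` are my own): the $(i,j)$ entry of $ad(A)$ is the $(j,i)$ cofactor of $A$, and dividing by $det(A)$ gives the inverse.

```python
def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def adjugate(m):
    """ad(A): entry (i, j) is (-1)^(i+j) times the minor of A
    with row j and column i removed (the transposed cofactor matrix)."""
    n = len(m)
    return [[(-1) ** (i + j) *
             det([row[:i] + row[i + 1:] for k, row in enumerate(m) if k != j])
             for j in range(n)]
            for i in range(n)]

def inverse(m):
    """A^{-1} = ad(A) / det(A); only defined when det(A) != 0."""
    d = det(m)
    assert d != 0, "singular matrix: no inverse"
    return [[entry / d for entry in row] for row in adjugate(m)]
```

For $A=\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ this reproduces the 2×2 example above: `adjugate(A)` is $\begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix}$ (the $d, -b, -c, a$ pattern), `det(A)` is $ad-bc=-2$, and the division by the determinant is exactly the $\frac{1}{det(A)}$ factor the question asks about.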
https://en.wikipedia.org/wiki/Adjugate_matrix