I think the best way to answer your question is this: it's a mnemonic.
This mnemonic lets you get your hands on a collection of mathematical objects called "exterior forms". From this perspective, it's not a "hack" but to explain exactly what it is essentially requires discussing dual spaces and things not appropriate for standard multivariable calculus classes. [Edit: The Mathoverflow post cited in the comments above is a great discussion on how one might try to give this a formal footing.]
Here's a way to view why/how the cross product works.
Let $e_1=\langle 1,0,\dots,0 \rangle$, $e_2=\langle 0,1,\dots,0 \rangle$, $\dots$, $e_n = \langle 0,\dots,0,1\rangle$ (so in $\mathbb{R}^3$ we have $e_1=\vec{i}$, $e_2=\vec{j}$, and $e_3=\vec{k}$). Next, consider vectors $\vec{a}_1 =\langle a_{11},a_{12},\dots,a_{1n} \rangle$, $\vec{a}_2 =\langle a_{21},a_{22},\dots,a_{2n} \rangle$, $\dots$, $\vec{a}_{n-1} =\langle a_{(n-1)1},a_{(n-1)2},\dots,a_{(n-1)n} \rangle$. Consider the $n \times n$ "matrix"
$$ A = \begin{bmatrix} e_1 & e_2 & \cdots & e_n \\ a_{11} & a_{12} & \cdots & a_{1n} \\
\vdots & \vdots & \ddots & \vdots \\ a_{(n-1)1} & a_{(n-1)2} & \cdots & a_{(n-1)n} \end{bmatrix} $$
The determinant of $A$ is a vector (the vectors $e_1$, $e_2$, $\dots$ will only be multiplied by subdeterminants made up entirely of scalars so we never need to worry about multiplying vectors). Also, by the way the dot product is defined, $\mathrm{det}(A) \cdot \vec{b}$ just results in replacing $e_1$, $e_2$, etc. with the components of $\vec{b}$ (this is easily seen from the cofactor expansion of the determinant along the first row). That is...
$$ \mathrm{det}(A) \cdot \vec{b} = \mathrm{det}\left( \begin{bmatrix} b_1 & b_2 & \cdots & b_n \\ a_{11} & a_{12} & \cdots & a_{1n} \\
\vdots & \vdots & \ddots & \vdots \\ a_{(n-1)1} & a_{(n-1)2} & \cdots & a_{(n-1)n} \end{bmatrix} \right) $$
Now recall that the determinant of a matrix is zero if it has a repeated row. So if we dot this "cross product vector" $\mathrm{det}(A)$ with any $\vec{a}_i$, the formula above gives a determinant whose first row duplicates the $i$-th input row, and hence we get zero. Thus $\mathrm{det}(A)$ is orthogonal to $\vec{a}_1$, $\dots$, $\vec{a}_{n-1}$. Hence we have a "cross product" for $\mathbb{R}^n$.
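The construction above is easy to compute directly. Here is a minimal Python sketch (the helper names `det`, `cross`, and `dot` are mine, not any standard API): it expands the formal determinant along its first row, so component $j$ of the result is the signed cofactor that multiplies $e_{j+1}$, and then it verifies the orthogonality claim with dot products.

```python
def det(m):
    # Determinant by Laplace expansion along the first row;
    # fine for the small matrices we use here.
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cross(*vectors):
    """Generalized cross product of n-1 vectors in R^n: component j is
    the signed cofactor of e_{j+1} in the formal determinant det(A)."""
    n = len(vectors) + 1
    assert all(len(v) == n for v in vectors)
    rows = [list(v) for v in vectors]
    return [(-1) ** j * det([row[:j] + row[j+1:] for row in rows])
            for j in range(n)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Classical case in R^3: i x j = k
print(cross([1, 0, 0], [0, 1, 0]))  # [0, 0, 1]

# R^4: the product of three vectors is orthogonal to each of them
a, b, c = [1, 2, 3, 4], [0, 1, 0, 2], [5, 0, 1, 1]
p = cross(a, b, c)
print(all(dot(p, v) == 0 for v in (a, b, c)))  # True
```

With integer inputs the orthogonality check is exact, since it is literally the repeated-row determinant from the argument above.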
Not a hack. Just a clean way to rig up a function which takes in $n-1$ vectors and spits out a vector perpendicular to all of its inputs.
Edit: Some people say there is no cross product except in $\mathbb{R}^3$. This is true in a certain sense. If the purpose of the cross product is to take two vectors and give a vector perpendicular to both, then the two inputs account for 2 dimensions and the output for 1, so we need to be working in $2+1=3$ dimensional space. (There is also a binary cross product in $\mathbb{R}^7$, but that's a long story.) However, if you don't require your "cross product" to be a binary product, the construction above works in $\mathbb{R}^n$ for any $n \geq 2$.
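For instance, in the smallest case $n=2$ the construction takes a single input vector and returns one perpendicular to it: $\mathrm{det}\begin{bmatrix} e_1 & e_2 \\ v_1 & v_2 \end{bmatrix} = \langle v_2, -v_1 \rangle$, a $90^\circ$ rotation. A quick sketch (the function name is mine):

```python
def cross_2d(v):
    # det of [[e1, e2], [v1, v2]] = v2*e1 - v1*e2 = (v2, -v1)
    return [v[1], -v[0]]

v = [3, 4]
w = cross_2d(v)
print(w)                                 # [4, -3]
print(sum(x * y for x, y in zip(v, w)))  # 0, i.e. perpendicular
```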
For an abstract field, $+$ and $\times$ are just symbols for two binary operations which need not be related in any way except by the distributive requirement (i.e. $a\times(b+c)=(a\times b)+(a \times c)$). We use $+$ and $\times$ because they represent the operations in the fields we know and love best: the rational numbers, the real numbers, and the complex numbers. You could use $\heartsuit$ and $\clubsuit$ if you like them better. But, as Arturo pointed out, thinking of multiplication as repeated addition is dangerous even in these fields. And if your field has elements which are, say, sequences, it gets worse: how would I add something like $(0,1,0,\cdots)$ to itself $(1,1,1,\cdots)$ times?
But this idea of "adding" an element to itself $n$ times (for a natural number $n$) has been thought about before, and you might consider reading the following to see how different things are in abstract fields.
http://en.wikipedia.org/wiki/Characteristic_(algebra)
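To make the idea of characteristic concrete, here is a small sketch of my own (a toy example, not taken from the linked article) using the field $\mathbb{Z}/5\mathbb{Z}$: "adding an element to itself $n$ times" is perfectly well defined for a natural number $n$, and the characteristic appears as the smallest $n$ for which $n \cdot 1 = 0$.

```python
p = 5  # Z/pZ is a field exactly when p is prime

def add(a, b):
    # Field addition in Z/5Z
    return (a + b) % p

def times_n(x, n):
    # "Add x to itself n times" -- repeated addition by a natural number,
    # which makes sense even though n is not an element of the field.
    total = 0
    for _ in range(n):
        total = add(total, x)
    return total

print(times_n(1, 5))  # 0: the characteristic of Z/5Z is 5
print(times_n(3, 5))  # 0: in fact 5*x = 0 for every element x
```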
So, the structure that the set of square matrices forms, under matrix addition and matrix multiplication, is a non-commutative ring.