You can define a tensor that agrees with the Levi-Civita symbol in orthonormal coordinate systems but has the correct components in non-orthonormal ones.
This result, among others, can be derived in the setting of Clifford algebra.
Clifford algebra deals with a "quotient" of the tensor algebra: an interesting subset, if you will, of tensors that correspond to vectors, planes, and other objects that can often be interpreted geometrically.
To facilitate this, Clifford algebra introduces a "geometric product" of vectors, which obeys the following laws:
- If two vectors $a, b$ are orthogonal, then $ab = -ba$ under the product.
- The product of a vector with itself is a scalar, i.e. $aa = |a|^2$.
- The product is associative: $(ab)c = a(bc)$ for all vectors $a, b, c$.
- The product is distributive over addition: $a(b+c) = ab + ac$.
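These four rules are enough to compute with concretely. Here is a minimal sketch (not from the text; the representation and names are my own) of the geometric product in 3D Euclidean space, together with checks of the rules:

```python
# A multivector is a dict {blade: coefficient}, where a blade is a tuple
# of increasing basis indices: () is the scalar part, (1,) is e1,
# (1, 2) is e1 e2, and (1, 2, 3) is the trivector.

def blade_product(p, q):
    """Multiply two basis blades; return (sign, canonical_blade)."""
    idx = list(p) + list(q)
    sign = 1
    # bubble sort; each adjacent swap uses rule 1 (e_i e_j = -e_j e_i)
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    out = []
    for k in idx:  # rule 2: cancel repeated vectors, e_i e_i = 1
        if out and out[-1] == k:
            out.pop()
        else:
            out.append(k)
    return sign, tuple(out)

def gp(u, v):
    """Geometric product of two multivectors, using distributivity."""
    w = {}
    for bu, cu in u.items():
        for bv, cv in v.items():
            s, b = blade_product(bu, bv)
            w[b] = w.get(b, 0.0) + s * cu * cv
    return {b: c for b, c in w.items() if abs(c) > 1e-12}

def mv_eq(u, v, tol=1e-9):
    """Compare multivectors up to floating-point tolerance."""
    return all(abs(u.get(k, 0) - v.get(k, 0)) < tol for k in set(u) | set(v))

e1, e2 = {(1,): 1.0}, {(2,): 1.0}
a = {(1,): 2.0, (2,): 3.0, (3,): 4.0}
b = {(2,): 1.5, (1, 3): -2.0}
c = {(): 0.5, (3,): 1.0}

assert gp(e1, e2) == {(1, 2): 1.0}               # rule 1: e1 e2 = -e2 e1
assert gp(e2, e1) == {(1, 2): -1.0}
assert gp(a, a) == {(): 29.0}                    # rule 2: a a = |a|^2 = 4+9+16
assert mv_eq(gp(gp(a, b), c), gp(a, gp(b, c)))   # rule 3: associativity
```

Rule 4, distributivity, is built into `gp` itself: the double loop over terms is exactly the expansion $a(b+c) = ab + ac$.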
From this definition, we can build up various objects that are not vectors but are produced from products of vectors under the geometric product.
With the geometric product in place, consider two vectors $a, b$, and write $b = b_\parallel + b_\perp$, the parts of $b$ parallel and perpendicular to $a$, respectively. Now then, we can write the product $ab$ as
$$ab = a b_\parallel + a b_\perp$$
The first term, $a b_\parallel$, is a scalar: $b_\parallel = \alpha a$ for some scalar $\alpha$, and $aa = |a|^2$ is a scalar by rule 2, so $a b_\parallel = \alpha |a|^2$.
The second term cannot be reduced, but we know from rule 1 that it anticommutes: $a b_\perp = -b_\perp a$. This is just like the cross product.
Indeed, if you write out this product with components, you get the following:
$$ab = (a^1 b^1 + a^2 b^2 + a^3 b^3) + (a^1 b^2 - a^2 b^1) e_1 e_2 + \ldots = a \cdot b + \frac{1}{2} (a^i b^j - a^j b^i) e_i e_j$$
(summation implied). The latter term is called a bivector and is traditionally denoted $a \wedge b$.
You might have noticed by now that we have at least three different kinds of objects: vectors, scalars, and bivectors. In Clifford algebra, we label each such object by the number of vectors needed to form it, and we call this number the grade of the object. Scalars are grade-0, vectors grade-1, and bivectors grade-2. In 3d, you can also form a grade-3 object, a trivector. One choice is $\epsilon = e_1 e_2 e_3$.
Now, what happens when you multiply a bivector with $\epsilon$?
First, the result must be a vector. Every bivector can be written as a linear combination of $e_1 e_2, e_2 e_3, e_3 e_1$, and $\epsilon$ contains each of those as a factor. You can check that $e_1 e_2 \epsilon = e_1 e_2 e_1 e_2 e_3 = -e_3$ (use rule 1 to anticommute swaps and rule 2 to annihilate repeated vectors). The same holds for all the other terms.
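This sign bookkeeping is mechanical enough to automate. A small sketch (the helper `mul_blades` is hypothetical, not from the text) that canonicalizes a product of basis vectors by counting anticommuting swaps and cancelling repeated indices:

```python
def mul_blades(*blades):
    """Multiply basis blades (tuples of indices); return (sign, blade)."""
    idx = [i for b in blades for i in b]
    sign = 1
    # bubble sort: each adjacent swap of distinct indices flips the sign
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    out = []
    for k in idx:  # repeated vectors annihilate: e_i e_i = 1
        if out and out[-1] == k:
            out.pop()
        else:
            out.append(k)
    return sign, tuple(out)

eps = (1, 2, 3)
assert mul_blades((1, 2), eps) == (-1, (3,))  # e1 e2 eps = -e3
assert mul_blades((2, 3), eps) == (-1, (1,))  # e2 e3 eps = -e1
assert mul_blades((3, 1), eps) == (-1, (2,))  # e3 e1 eps = -e2
```

Each basis bivector maps to minus the remaining basis vector, confirming the claim above.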
By convention, then, we can define a product
$$a \times b = -\epsilon (a \wedge b)$$
This coincides with the usual definition of the cross product. You can verify this term by term if you like; it's not that interesting to do algebraically, but geometrically, one comes to understand that multiplication by $\epsilon$ produces orthogonal complements of subspaces: a vector goes to its complementary plane, a plane to its normal vector, and so on. That is also why I called this 3-vector $\epsilon$: its components are those of the correct Levi-Civita tensor (not the symbol), which has different components in non-orthonormal coordinate systems. And this is exactly what the Hodge star operator expresses in the parlance of differential forms.
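The term-by-term verification is easy to delegate to a computer. A quick numeric check (numpy assumed): the bivector $a \wedge b$ has components $a^i b^j - a^j b^i$, and multiplying by $-\epsilon$ sends $e_2 e_3 \mapsto e_1$, $e_3 e_1 \mapsto e_2$, $e_1 e_2 \mapsto e_3$, which recovers the usual cross product:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)

# wedge[i, j] = a^i b^j - a^j b^i  (components of a ∧ b, 0-indexed)
wedge = np.outer(a, b) - np.outer(b, a)

# apply -eps: read off the e2e3, e3e1, e1e2 coefficients as a vector
dual = np.array([wedge[1, 2], wedge[2, 0], wedge[0, 1]])

assert np.allclose(dual, np.cross(a, b))
```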
Outside of 3d, the dual of a bivector is no longer a vector (in 4d, the object totally complementary to a bivector is another bivector), and so the cross product as we typically imagine it no longer makes sense.
Well, it’s easy to find such a gradient. You mentioned almost everything you need, but note these facts:
- $\boldsymbol{p} \times \boldsymbol{q} = - \, \boldsymbol{q} \times \boldsymbol{p}\,$ for any two vectors $\boldsymbol{p}$ and $\boldsymbol{q}$
- the partial derivative of a vector with respect to a scalar, such as a coordinate, isn’t some more complex tensor; it’s a vector too
- for differentiation of a product “$\circ$” of two factors, the famous “product rule” https://en.wikipedia.org/wiki/Product_rule applies: $\frac{\partial}{\partial q^i} \left( u \circ v \right) = \left( \frac{\partial}{\partial q^i} u \right) \! \circ v \, + \, u \circ \! \left( \frac{\partial}{\partial q^i} v \right)$
So you have
$$
{\boldsymbol{\nabla} \! \left( \boldsymbol{a} \times \boldsymbol{b} \right)}
= {\boldsymbol{r}^i \partial_i \! \left( \boldsymbol{a} \times \boldsymbol{b} \right)}
= {\boldsymbol{r}^i \! \left( \partial_i \boldsymbol{a} \times \boldsymbol{b} +
\boldsymbol{a} \times \partial_i \boldsymbol{b} \right)}
= {\boldsymbol{r}^i \! \left( \partial_i \boldsymbol{a} \times \boldsymbol{b} -
\partial_i \boldsymbol{b} \times \boldsymbol{a} \right)}
= {\boldsymbol{r}^i \partial_i \boldsymbol{a} \times \boldsymbol{b} -
\boldsymbol{r}^i \partial_i \boldsymbol{b} \times \boldsymbol{a}}
= {\boldsymbol{\nabla} \boldsymbol{a} \times \boldsymbol{b} -
\! \boldsymbol{\nabla} \boldsymbol{b} \times \boldsymbol{a}}
$$
The minus sign before the second addend appears because you need to swap the factors, so that the differentiated factor comes first and $\boldsymbol{r}^i \partial_i$ can be collected into a full $\boldsymbol{\nabla}$
When the first factor, say $\boldsymbol{\phi}$, is constant in space, that is, it doesn’t change with coordinates, this becomes
$$
{\boldsymbol{\nabla} \! \left( \boldsymbol{\phi} \times \boldsymbol{b} \right)}
= {\boldsymbol{\nabla} \! \boldsymbol{\phi} \times \boldsymbol{b} -
\! \boldsymbol{\nabla} \boldsymbol{b} \times \boldsymbol{\phi}}
= {{^2{\boldsymbol{0}}} \times \boldsymbol{b} - \! \boldsymbol{\nabla} \boldsymbol{b} \times \boldsymbol{\phi}}
= {{^2{\boldsymbol{0}}} - \! \boldsymbol{\nabla} \boldsymbol{b} \times \boldsymbol{\phi}}
= {- \boldsymbol{\nabla} \boldsymbol{b} \times \boldsymbol{\phi}}
$$
That’s why there’s a minus sign.
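In Cartesian coordinates, where the basis vectors are constant, the product rule above reduces to $\partial_i \left( \boldsymbol{a} \times \boldsymbol{b} \right) = \partial_i \boldsymbol{a} \times \boldsymbol{b} + \boldsymbol{a} \times \partial_i \boldsymbol{b}$, which is easy to sanity-check numerically. A minimal sketch with made-up example fields $\boldsymbol{a}(x)$, $\boldsymbol{b}(x)$ (numpy assumed):

```python
import numpy as np

# arbitrary smooth vector fields, chosen only for this check
def a(x): return np.array([np.sin(x[0]), x[1] * x[2], np.exp(x[2])])
def b(x): return np.array([x[0] * x[1], np.cos(x[2]), x[0] + x[1]])

x0, h = np.array([0.3, -0.7, 0.5]), 1e-6

def d(f, i):
    """Central-difference partial derivative of a vector field along x^i."""
    e = np.zeros(3); e[i] = h
    return (f(x0 + e) - f(x0 - e)) / (2 * h)

for i in range(3):
    lhs = d(lambda x: np.cross(a(x), b(x)), i)
    rhs = np.cross(d(a, i), b(x0)) + np.cross(a(x0), d(b, i))
    assert np.allclose(lhs, rhs, atol=1e-6)
```

Swapping the second term with the anticommutation rule, $\boldsymbol{a} \times \partial_i \boldsymbol{b} = -\,\partial_i \boldsymbol{b} \times \boldsymbol{a}$, gives exactly the minus sign discussed above.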
The general setting for defining a cross product of two vectors is as follows. Consider a $3$-dimensional, oriented, pseudo-inner-product space $(V,\text{Or}, g)$, and let $\mu$ be the volume form on $V$ defined with respect to the given orientation and $g$ (see this question and answer of mine for more about the definition of this volume form).
Now, given $v,w\in V$, we define \begin{align} v\times w&:= g^{\sharp}\left(\mu(v,w,\cdot)\right) \end{align} Here, $g^{\flat}:V\to V^*$, $x\mapsto g(x,\cdot)$ is the flat-mapping, which one can easily show is an isomorphism, and $g^{\sharp}:V^*\to V$ is the inverse isomorphism. Just so we're clear: $\mu$ is the volume form on $V$, which means it is an alternating multilinear mapping $V\times V\times V\to\Bbb{R}$, so by feeding in $v,w$, we have $\mu(v,w,\cdot)\in V^*$, hence by applying $g^{\sharp}$ to it, we get an element of $V$. (and in the case of $\Bbb{R}^3$ with the standard inner product, we have $\mu=\det$ being the usual determinant of a matrix thought of as an alternating $(0,3)$-tensor on $\Bbb{R}^3$... and it coincides with the usual definition).
Now, extracting the components is basic linear algebra. Let $\{f_1,f_2,f_3\}$ be a positively-oriented basis of $V$ (otherwise, there will be an extra minus sign at the end; we make no assumptions about orthogonality), and let $\{\phi^1,\phi^2,\phi^3\}$ be the dual basis for $V^*$. Then, as shown in my linked answer, one has $\mu= \sqrt{|g|_F}\,\phi^1\wedge \phi^2\wedge \phi^3$, where the square root is short for the following $3\times 3$ determinant:
\begin{align} \sqrt{|g|_F}:= \sqrt{|\det [g(f_i,f_j)]_{i,j=1}^3|} \end{align}

Also, I shall use the notation $g_{ab}=g(f_a,f_b)$, and $g^{ab}$ shall denote the $(a,b)$-entry of the matrix inverse to $[g_{ij}]_{i,j=1}^3$. I hope you remember how elements of $V$ and $V^*$ can be expanded in terms of their bases. In what follows, all indices range over $1,2,3$, regardless of whether I use Latin/Greek indices. We have
\begin{align} v\times w&:=g^{\sharp}\bigg(\mu(v,w,\cdot)\bigg)\\ &=g^{\sharp}\bigg(\mu(v,w,f_{\alpha})\phi^{\alpha}\bigg)\\ &=\mu(v,w,f_{\alpha})\cdot g^{\sharp}(\phi^{\alpha})\tag{$g^{\sharp}$ is linear}\\ &= \mu(v^{\beta}f_{\beta},w^{\gamma}f_{\gamma},f_{\alpha})\cdot g^{i\alpha}f_i\\ &= \mu(f_{\beta},f_{\gamma},f_{\alpha})\cdot g^{i\alpha}v^{\beta}w^{\gamma}\,f_i\\ &=\left(\sqrt{|g|_F}\,\phi^1\wedge \phi^2\wedge \phi^3\right)(f_{\beta},f_{\gamma},f_{\alpha})\cdot g^{i\alpha}v^{\beta}w^{\gamma}\,f_i\\ &= \sqrt{|g|_F}\,\, \varepsilon_{\beta\gamma\alpha}\cdot g^{i\alpha}v^{\beta}w^{\gamma}\,f_i \end{align}

In the last line, I used the fact that $\phi^1\wedge\phi^2\wedge\phi^3(f_{\beta},f_{\gamma},f_{\alpha})=\det(e_{\beta},e_{\gamma},e_{\alpha})=\varepsilon_{\beta\gamma\alpha}$, where $e_1,e_2,e_3$ are the standard basis vectors of $\Bbb{R}^3$ (to see why this holds, just unwind the definition of the wedge product and use the fact that the $\phi$'s are dual to the $f$'s, so eventually determinants start popping up).
In other words, if we take a look at the $i^{th}$ component only, we have
\begin{align} (v\times w)^i&=\sqrt{|g|_F}\,\, \varepsilon_{\beta\gamma\alpha}\cdot g^{i\alpha}v^{\beta}w^{\gamma}\\ &=\sqrt{|g|_F}\,\,g^{i\alpha}\varepsilon_{\alpha\beta\gamma}v^{\beta}w^{\gamma}, \end{align} where in the last line, I permuted the indices in the Levi-Civita symbol appropriately to make things "look nice". Note that if $g$ is a proper inner product and the vectors $\{f_1,f_2,f_3\}$ are orthonormal, then the determinant equals $1$ and $g^{i\alpha}$ becomes a Kronecker delta, so \begin{align} (v\times w)^i&=\sum_{\beta,\gamma=1}^3\varepsilon_{i\beta\gamma}v^{\beta}w^{\gamma}, \end{align} which is exactly what we expect.
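As a sanity check, the component formula can be verified numerically against the ordinary Euclidean cross product, re-expressed in an arbitrary positively-oriented basis. A sketch (numpy assumed; the particular basis below is a made-up example):

```python
import numpy as np

F = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])   # columns are f_1, f_2, f_3; det F = 6 > 0
g = F.T @ F                        # Gram matrix, g_ab = <f_a, f_b>
ginv = np.linalg.inv(g)            # entries g^{ab}

eps = np.zeros((3, 3, 3))          # Levi-Civita symbol (0-indexed)
for p, s in [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
             ((0, 2, 1), -1), ((2, 1, 0), -1), ((1, 0, 2), -1)]:
    eps[p] = s

v = np.array([0.4, -1.2, 2.0])     # components of v, w in the basis f_a
w = np.array([1.1, 0.3, -0.7])

# (v x w)^i = sqrt(|g|_F) g^{i a} eps_{a b c} v^b w^c
formula = np.sqrt(np.linalg.det(g)) * np.einsum('ia,abc,b,c->i', ginv, eps, v, w)

# compare with the Euclidean cross product, converted back to this basis
expected = np.linalg.solve(F, np.cross(F @ v, F @ w))
assert np.allclose(formula, expected)
```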