When writing $p$-forms and $p$-vectors, is the increasing-index restriction $\vert i,j,\dots,k\vert$ equivalent to "unique sets" of index values?

combinatorics, differential-forms, exterior-algebra, tensors

This is my motivation. In the case of $\mathbb{R}^3,$ the wedge product of two vectors has components identical to those of the cross product. The way I have written the first four expressions below is consistent with the traditional form of the cross product; note that the last term there has $i=3,\ j=1.$
The remaining expressions show this to be equivalent to the form typically used in exterior calculus, where the vertical bars indicate that the sum runs only over index values in increasing order.

\begin{align*}
\mathfrak{a}\wedge\mathfrak{b}= & \hat{\mathfrak{e}}_{i}\wedge\hat{\mathfrak{e}}_{j}a^{i}b^{j}\\
= & \begin{pmatrix}+\hat{\mathfrak{e}}_{1}\wedge\hat{\mathfrak{e}}_{2}\left(a^{1}b^{2}-a^{2}b^{1}\right)\\
+\hat{\mathfrak{e}}_{2}\wedge\hat{\mathfrak{e}}_{3}\left(a^{2}b^{3}-a^{3}b^{2}\right)\\
+\hat{\mathfrak{e}}_{3}\wedge\hat{\mathfrak{e}}_{1}\left(a^{3}b^{1}-a^{1}b^{3}\right)
\end{pmatrix}\\
= & \hat{\mathfrak{e}}_{1}\wedge\hat{\mathfrak{e}}_{2}c^{12}+\hat{\mathfrak{e}}_{2}\wedge\hat{\mathfrak{e}}_{3}c^{23}+\hat{\mathfrak{e}}_{3}\wedge\hat{\mathfrak{e}}_{1}c^{31}\\
=&\hat{\mathfrak{e}}_{1}\wedge\hat{\mathfrak{e}}_{2}c^{12}+\hat{\mathfrak{e}}_{1}\wedge\hat{\mathfrak{e}}_{3}c^{13}+\hat{\mathfrak{e}}_{2}\wedge\hat{\mathfrak{e}}_{3}c^{23}\\
= & \hat{\mathfrak{e}}_{\vert i}\wedge\hat{\mathfrak{e}}_{j\vert}\left(a^{i}b^{j}-a^{j}b^{i}\right)\\
= & \hat{\mathfrak{e}}_{\vert i}\wedge\hat{\mathfrak{e}}_{j\vert}c^{ij}=\frac{1}{2}\hat{\mathfrak{e}}_{i}\wedge\hat{\mathfrak{e}}_{j}c^{ij}
\end{align*}
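As a quick sanity check (not part of the derivation above), here is a minimal Python sketch, assuming NumPy is available, confirming that the independent coefficients $c^{ij}=a^{i}b^{j}-a^{j}b^{i}$ reproduce the components of the cross product:

```python
import numpy as np

# Compare the wedge coefficients c^{ij} = a^i b^j - a^j b^i with the
# components of the cross product a x b for random vectors in R^3.
rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)

c = np.outer(a, b) - np.outer(b, a)   # c[i, j] = a^i b^j - a^j b^i (0-based)
cross = np.cross(a, b)

# (a x b)^1 = c^{23}, (a x b)^2 = c^{31}, (a x b)^3 = c^{12}
assert np.allclose(cross, [c[1, 2], c[2, 0], c[0, 1]])
print("wedge coefficients match the cross product")
```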

Now consider the case of two indices ranging over four values. There are six unordered pairs.

$$
\binom{n}{k}=\frac{n!}{k!\,(n-k)!}\Rightarrow \binom{4}{2}=\frac{4!}{2!\,2!}=6$$
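For concreteness, a tiny Python check (standard library only) that enumerates those six unordered pairs:

```python
from itertools import combinations
from math import comb

# The six unordered index pairs drawn from four values.
pairs = list(combinations(range(1, 5), 2))
print(pairs)   # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
assert len(pairs) == comb(4, 2) == 6
```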

Let $e=\pm 1$ and $f=\pm 1$. Call the pairs equal to $+1$ even.

\begin{align*}
\left[ij\right]= & -\left[ji\right]\\
e= & \left[12\right]=\left[23\right]=\left[34\right]=\left[41\right]\\
-e= & \left[21\right]=\left[32\right]=\left[43\right]=\left[14\right]\\
f= & \left[13\right]=\left[24\right]\\
-f= & \left[31\right]=\left[42\right]
\end{align*}

The notation $\vert ij \vert$ now means to sum over the even pairs. The first expanded sum below uses $e=f=1$; the second uses $e=1,\ f=-1.$

\begin{align*}
\hat{\mathfrak{e}}_{\vert i}\wedge\hat{\mathfrak{e}}_{j\vert}T^{ij}= & \begin{pmatrix}+\hat{\mathfrak{e}}_{1}\wedge\hat{\mathfrak{e}}_{2}T^{12}+\hat{\mathfrak{e}}_{4}\wedge\hat{\mathfrak{e}}_{1}T^{41}\\
+\hat{\mathfrak{e}}_{2}\wedge\hat{\mathfrak{e}}_{3}T^{23}+\hat{\mathfrak{e}}_{1}\wedge\hat{\mathfrak{e}}_{3}T^{13}\\
+\hat{\mathfrak{e}}_{3}\wedge\hat{\mathfrak{e}}_{4}T^{34}+\hat{\mathfrak{e}}_{2}\wedge\hat{\mathfrak{e}}_{4}T^{24}
\end{pmatrix}\\
= & \begin{pmatrix}+\hat{\mathfrak{e}}_{1}\wedge\hat{\mathfrak{e}}_{2}T^{12}+\hat{\mathfrak{e}}_{4}\wedge\hat{\mathfrak{e}}_{1}T^{41}\\
+\hat{\mathfrak{e}}_{2}\wedge\hat{\mathfrak{e}}_{3}T^{23}+\hat{\mathfrak{e}}_{3}\wedge\hat{\mathfrak{e}}_{1}T^{31}\\
+\hat{\mathfrak{e}}_{3}\wedge\hat{\mathfrak{e}}_{4}T^{34}+\hat{\mathfrak{e}}_{4}\wedge\hat{\mathfrak{e}}_{2}T^{42}
\end{pmatrix}
\end{align*}

Since both the wedge product and the tensor components change sign when the index order is reversed, the choice of which ordered pair from each of the six unordered sets is designated "even" is irrelevant. What matters is that we sum over each unordered set of index values exactly once.
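Here is a minimal numerical sketch of that claim, under the assumption that $T^{ij}$ is antisymmetric; each basis bivector $\hat{\mathfrak{e}}_{i}\wedge\hat{\mathfrak{e}}_{j}$ is modeled as an antisymmetric matrix, and the helper name `wedge` and the two pair lists are just illustrative choices:

```python
import numpy as np

def wedge(i, j, n=4):
    # Model e_i ^ e_j as the n x n matrix with +1 at (i, j) and -1 at (j, i).
    E = np.zeros((n, n))
    E[i - 1, j - 1], E[j - 1, i - 1] = 1.0, -1.0
    return E

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
T = M - M.T                      # antisymmetric components T^{ij}

even_1 = [(1, 2), (4, 1), (2, 3), (1, 3), (3, 4), (2, 4)]   # e = f = 1
even_2 = [(1, 2), (4, 1), (2, 3), (3, 1), (3, 4), (4, 2)]   # e = 1, f = -1

sum_1 = sum(T[i - 1, j - 1] * wedge(i, j) for i, j in even_1)
sum_2 = sum(T[i - 1, j - 1] * wedge(i, j) for i, j in even_2)
assert np.allclose(sum_1, sum_2)
print("both choices of 'even' pairs give the same bivector")
```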

It seems obvious to me that this generalizes to any number of indices and any range of index values. I don't recall having ever seen that (apparent) fact stated. Is it correct?

For example:

From Misner, Thorne and Wheeler, Exercise 4.12(b):
[image of the exercise omitted]

$\dots$The final line here introduces the convention that a summation over indices enclosed between vertical bars includes only those terms with indices in increasing order.

Could this be replaced by 'a summation over indices enclosed between vertical bars includes only terms with unique sets of index values', without changing the meaning of the expression?

This seems blatantly obvious to me (though working out an inductive proof is not immediately obvious).

And, even if it applies here, is there a context in which this redefinition would produce a non-equivalent mathematical expression?

Best Answer

I think I finally understand your question, and the answer is simply yes.

In an expression like $$\sum_{j_1<\dots< j_p}\delta^{h_1\dots h_p}_{j_1\dots j_p}\,dx^{j_1}\wedge\dots \wedge dx^{j_p}=\frac{1}{p!}\delta^{h_1\dots h_p}_{j_1\dots j_p}\,dx^{j_1}\wedge \dots \wedge dx^{j_p}$$ we divide by $p!$ because among the $p!$ permutations of $p$ distinct integers $j_1,\dots, j_p$ there is only one for which $j_1<j_2<\dots<j_p$. But of course you are free to pick one of the other $p!-1$ permutations if you are so inclined.
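A small numerical sketch of this $1/p!$ identity for $p=2$, $n=4$: each basis 2-form $dx^{j_1}\wedge dx^{j_2}$ is modeled as an antisymmetric matrix and the generalized Kronecker delta as a determinant of ordinary deltas (the helpers `E` and `gdelta` are invented for the illustration):

```python
import numpy as np
from itertools import combinations, product
from math import factorial

n, p = 4, 2

def E(j1, j2):
    # Model dx^{j1} ^ dx^{j2}; equal indices correctly give the zero matrix.
    M = np.zeros((n, n))
    M[j1, j2] += 1.0
    M[j2, j1] -= 1.0
    return M

def gdelta(h, j):
    # Generalized Kronecker delta = det of the matrix of ordinary deltas.
    return np.linalg.det(np.array([[1.0 if hi == jk else 0.0 for jk in j] for hi in h]))

for h in product(range(n), repeat=p):
    lhs = sum(gdelta(h, j) * E(*j) for j in combinations(range(n), p))
    rhs = sum(gdelta(h, j) * E(*j) for j in product(range(n), repeat=p)) / factorial(p)
    assert np.allclose(lhs, rhs)
print("restricted sum equals 1/p! times the unrestricted sum")
```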

Further, for any $A_{h_1\dots h_p}$, irrespective of symmetry or skew-symmetry properties, we have $$A_{h_1\dots h_p}\,dx^{h_1}\wedge \dots \wedge dx^{h_p}=\sum_{j_1<\dots <j_p} \delta^{h_1\dots h_p}_{j_1\dots j_p}A_{h_1\dots h_p}\,dx^{j_1}\wedge \dots \wedge dx^{j_p}$$

And again, if you want, you can pick another permutation. This works because both $\delta^{h_1\dots h_p}_{j_1\dots j_p}$ and $dx^{j_1}\wedge \dots \wedge dx^{j_p}$ are skew-symmetric in $j_1,\dots, j_p$.
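And a similar sketch for the second identity, with a component array $A$ that has no symmetry at all (same made-up modeling as above):

```python
import numpy as np
from itertools import combinations, product

n = 4

def E(j1, j2):
    # Model dx^{j1} ^ dx^{j2} as an antisymmetric matrix (zero if j1 == j2).
    M = np.zeros((n, n))
    M[j1, j2] += 1.0
    M[j2, j1] -= 1.0
    return M

def gdelta(h, j):
    # Generalized Kronecker delta = det of the matrix of ordinary deltas.
    return np.linalg.det(np.array([[1.0 if hi == jk else 0.0 for jk in j] for hi in h]))

A = np.random.default_rng(2).standard_normal((n, n))   # no (anti)symmetry imposed

lhs = sum(A[h1, h2] * E(h1, h2) for h1, h2 in product(range(n), repeat=2))
rhs = sum(gdelta((h1, h2), (j1, j2)) * A[h1, h2] * E(j1, j2)
          for j1, j2 in combinations(range(n), 2)
          for h1, h2 in product(range(n), repeat=2))
assert np.allclose(lhs, rhs)
print("identity holds for a component array with no symmetry")
```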