[Math] the practical difference between abstract index notation and “ordinary” index notation

Tags: differential-geometry, tensors

I understand that in "normal" index notation the indices can be thought of as coordinates of scalar values inside a tabular data structure, while in abstract index notation they cannot. However, I am not clear on what practical difference this makes when actually doing math. If you are doing numerical calculations then you need to plug actual components into your tensors, so that is not abstract; but is there any difference if you are doing symbolic/algebraic computations? The notations look identical, and even though the interpretation is different, expressions in both cases ultimately denote tensors. As far as I know the algebraic laws are the same. Are there manipulations that are valid in one but not the other? If you see some tensor calculations, how can you tell whether abstract index notation is being used? If you are doing differential geometry with indices, do you need to decide whether your indices are abstract or not? Or am I just missing something?

Best Answer

First I'd like to share my understanding of abstract index notation. I think this understanding is simpler and more intuitive than Penrose and Rindler's original definition. Your question will be answered later with an example.

Abstract index notation is merely a labelling of the "slots" of the tensor. For example, $T_{ab}{}^c$ is just an abbreviation of $$T(-_a, -_b, -^c).$$ Each "slot" is a parameter as the tensor is viewed as a multilinear map $T:V\times V \times V^* \to \mathbb R$.

You may already be familiar with the labelling-slots interpretation. But what exactly does "labelling" a slot mean? Here is my understanding: it means we can fill a specific slot with a vector (or dual vector) by specifying the label of the slot. For example, if we fill the slot labelled $a$ with a vector $u$, the slot labelled $b$ with a vector $v$, and the slot labelled $c$ with a dual vector $\omega$, we get $T(u, v, \omega)$, that is $$ T(-_a, -_b, -^c)(\{a=u, \; b=v, \; c=\omega\}) = T(u, v, \omega). $$ Note that a set-like notation $\{a=u, \; b=v, \; c=\omega\}$ is used here, meaning that the order is irrelevant: $\{b=v, \; a=u, \; c=\omega\}$ and $\{a=u, \; b=v, \; c=\omega\}$ are the same.
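As a toy illustration (a sketch, not standard library code; the names `T` and `fill` are made up for this example), labelled slots can be modelled as keyword arguments, which are order-insensitive exactly like the set notation above:

```python
def T(u, v):
    # An arbitrary bilinear map on R^2: T(u, v) = u[0] * v[1].
    # Deliberately not symmetric, so slot order matters.
    return u[0] * v[1]

def fill(tensor, **slots):
    """Fill the slots labelled 'a' and 'b'; keyword order is irrelevant."""
    return tensor(slots["a"], slots["b"])

u, v = (1.0, 2.0), (3.0, 4.0)

# {a=u, b=v} and {b=v, a=u} give the same value ...
assert fill(T, a=u, b=v) == fill(T, b=v, a=u)
# ... but putting the vectors into the *other* slots generally does not:
assert fill(T, a=u, b=v) != fill(T, a=v, b=u)
```

The second assertion previews observation 1 below: what matters is which vector goes into which labelled slot, not the order in which the labels are written down.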

There are two observations from this definition of filling slots:

  1. The position order of the slots is significant. $$S_{ab} \neq S_{ba}$$ in the sense that $$ S(-_a, -_b)(\{a=u, \; b=v\}) = S(u, v) \neq S(v, u) = S(-_b, -_a)(\{a=u, \; b=v\}). $$

  2. An index letter can be substituted with any Latin letter, since it's just a label for the slot. For example, $T_{ab} = S_{ab}$ implies $T_{cd} = S_{cd}$, because $T_{ab} = S_{ab}$ means that for any vectors $u$ and $v$ $$ T(-_a, -_b)(\{a=u, \; b=v\}) = S(-_a, -_b)(\{a=u, \; b=v\}), $$ that is, $$ T(u, v) = S(u, v), $$ and $T_{cd}=S_{cd}$ is equivalent to $T(u, v) = S(u, v)$ too. Note that this index substitution is different from the index reordering in observation 1: a substitution must be applied to both sides of an equation. We can't exchange $a$ and $b$ on only one side of the trivial equation $S_{ab}=S_{ab}$ to conclude $S_{ab}=S_{ba}$.

Now we can use abstract index notation to denote the tensor product and contraction operations in a coordinate-free way. For example, $U_{abcd} = T_{ab}S_{cd}$ denotes the tensor product $$ U(-_a, -_b, -_c, -_d) = T(-_a, -_b) \cdot S(-_c, -_d). $$ And $T_{ae}{}^{ed}$ denotes the contraction with respect to the slots labelled $b$ and $c$ of $T_{ab}{}^{cd}$: $$ T_{ae}{}^{ed} = C_b{}^c(T_{ab}{}^{cd}) = \sum_{\sigma}T(-_a, \frac{\partial}{\partial x^\sigma}, \mathrm dx^\sigma, -^d). $$ Another important operation is (partial) application, but since it's equivalent to a tensor product followed by a contraction, there is no need to introduce a new notation. For example, applying a vector $u$ to the slot $a$ of $T_{ab}$ gives $$ T(u, -_b) = T_{ab}u^a, $$ where $T_{ab}u^a$ is a tensor product of $T$ and $u$ followed by a contraction: $C_a{}^c(T(-_a, -_b)\cdot u(-^c))$. The result has one free slot left, so it's a (0, 1) tensor. That is, a (0, 2) tensor partially applied with a vector is a (0, 1) tensor.
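Once a basis is chosen, these operations can be checked on component arrays. A NumPy sketch (array axes standing in for slots; the dimension $n=3$ and random components are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Components of a (2,2)-tensor T_{ab}^{cd} as a 4-axis array
# (axes 0,1,2,3 play the roles of slots a, b, c, d).
T = rng.standard_normal((n, n, n, n))

# Contraction C_b^c: sum over sigma of T_{a sigma}^{sigma d}.
# The repeated einsum label contracts axes 1 and 2.
contracted = np.einsum("abbd->ad", T)
manual = sum(T[:, s, s, :] for s in range(n))
assert np.allclose(contracted, manual)

# Partial application T(u, -_b) = T_{ab} u^a for a (0,2)-tensor:
# a tensor product followed by a contraction, leaving one free slot.
T2 = rng.standard_normal((n, n))
u = rng.standard_normal(n)
applied = np.einsum("ab,a->b", T2, u)
assert applied.shape == (n,)          # a (0,1)-tensor, as claimed
assert np.allclose(applied, T2.T @ u)
```

The `einsum` subscript strings mirror the index notation directly, which is part of why this correspondence is easy to check mechanically.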

Example

Consider the following problem: suppose $$T_{abc} = \mathrm dx^\sigma_a \mathrm dx^\mu_b \mathrm dx^\nu_c,$$ where $\sigma$, $\mu$, and $\nu$ are 3 concrete numbers, then what is $T_{abc} + T_{bca}$?

This example more or less answers your question: what is the practical difference between abstract index notation and “ordinary” index notation. Abstract index notation is easier to read and understand especially when abstract indices and concrete indices are mixed.

In abstract index notation, there is a convention that abstract indices use Latin letters while concrete indices use Greek letters. So in this example we can easily see that $a$, $b$, and $c$ are abstract indices while $\sigma$, $\mu$, and $\nu$ are concrete indices.

The notation $\mathrm dx^\sigma_a$ is not very common, but it makes sense. The dual vector $\mathrm dx^\sigma$ is naturally a function that can act on a vector. The equation $T_{abc} = \mathrm dx^\sigma_a \mathrm dx^\mu_b \mathrm dx^\nu_c$ which is just an abbreviation of $$ T(-_a, -_b, -_c) = \mathrm dx^\sigma(-_a) \cdot \mathrm dx^\mu(-_b) \cdot \mathrm dx^\nu(-_c) $$ means when slot $a$ is filled with a vector $u$, $\mathrm dx^\sigma$ will act on that $u$.

To solve the problem, use index substitution, we can get $T_{bca} = \mathrm dx^\sigma_b \mathrm dx^\mu_c \mathrm dx^\nu_a$. So $$ \begin{aligned} T_{abc} + T_{bca} &= \mathrm dx^\sigma_a \mathrm dx^\mu_b \mathrm dx^\nu_c + \mathrm dx^\sigma_b \mathrm dx^\mu_c \mathrm dx^\nu_a \\ &= \mathrm dx^\sigma_a \mathrm dx^\mu_b \mathrm dx^\nu_c + \mathrm dx^\nu_a \mathrm dx^\sigma_b \mathrm dx^\mu_c \\ &= (\mathrm dx^\sigma \otimes \mathrm dx^\mu \otimes \mathrm dx^\nu + \mathrm dx^\nu \otimes \mathrm dx^\sigma \otimes \mathrm dx^\mu)(-_a, -_b, -_c). \end{aligned} $$
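As a sanity check, the computation can be verified numerically; a NumPy sketch, representing $\mathrm dx^k$ as the standard basis covector and picking the hypothetical concrete values $\sigma,\mu,\nu = 0,1,2$ on $\mathbb R^3$:

```python
import numpy as np

n = 3
e = np.eye(n)              # e[k] is the covector dx^k in the standard basis
sigma, mu, nu = 0, 1, 2    # arbitrary concrete indices for illustration

# T_{abc} = dx^sigma_a dx^mu_b dx^nu_c as a 3-axis component array.
T = np.einsum("a,b,c->abc", e[sigma], e[mu], e[nu])

# T_{bca}: dx^sigma now fills slot b, dx^mu slot c, dx^nu slot a,
# so its components are (T_{bca})_{abc} = dx^nu_a dx^sigma_b dx^mu_c.
T_bca = np.einsum("b,c,a->abc", e[sigma], e[mu], e[nu])

# Compare with the coordinate-free result
# dx^sigma (x) dx^mu (x) dx^nu + dx^nu (x) dx^sigma (x) dx^mu:
rhs = (np.einsum("a,b,c->abc", e[sigma], e[mu], e[nu])
       + np.einsum("a,b,c->abc", e[nu], e[sigma], e[mu]))
assert np.allclose(T + T_bca, rhs)
```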

This is quite straightforward. On the other hand, if you use concrete index notation to solve this problem, first you need to figure out that the components of $T$ are all zero except $$T_{\xi\eta\zeta} = 1, \text{when}\; \xi=\sigma, \eta=\mu, \zeta=\nu.$$ Or $T_{\sigma\mu\nu}=1$. But what is $T_{bca}$? $T_{\mu\nu\sigma}$? No. You need to define another tensor $S_{\xi\eta\zeta}=T_{\eta\zeta\xi}$, and figure out that its components are all zero except $$ S_{\xi\eta\zeta} = 1, \text{when}\; \xi=\nu, \eta=\sigma, \zeta=\mu. $$ Then finally find the sum $T_{\xi\eta\zeta} + S_{\xi\eta\zeta}$. This procedure is quite complex and error-prone.
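The component-based procedure just described can itself be checked mechanically; a sketch (again with the hypothetical values $\sigma,\mu,\nu = 0,1,2$ on $\mathbb R^3$):

```python
import numpy as np

n = 3
sigma, mu, nu = 0, 1, 2    # arbitrary concrete indices for illustration

T = np.zeros((n, n, n))
T[sigma, mu, nu] = 1.0     # the only nonzero component of T

S = np.zeros((n, n, n))
S[nu, sigma, mu] = 1.0     # S_{xi eta zeta} = 1 when xi=nu, eta=sigma, zeta=mu

# S really is T with its slots permuted:
# S[xi, eta, zeta] = T[eta, zeta, xi].
assert np.allclose(S, np.transpose(T, (2, 0, 1)))
```

Getting the permutation `(2, 0, 1)` right is exactly the error-prone bookkeeping step that abstract index substitution sidesteps.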

Translating the equation $$ T_{abc} = \mathrm dx^\sigma_a \mathrm dx^\mu_b \mathrm dx^\nu_c $$ into concrete index notation, $$T_{\xi\eta\zeta} = \mathrm dx^\sigma_\xi \mathrm dx^\mu_\eta \mathrm dx^\nu_\zeta, $$ doesn't help much. Now $\mathrm dx^\sigma_\xi$ is a tensor whose components are all zero except $$ \mathrm dx^\sigma_\xi = 1, \text{when}\; \xi=\sigma. $$ You still need to worry about components, which is not natural. And six indices are mixed together, three of them fixed numbers, which is very confusing.
