The reasoning/need for writing tensor indices like $T^i{}_j{}^k$ instead of $T_{j}^{ik}$ (or $A^i{}_j$ instead of $A^i_j$)

Tags: intuition, tensor-rank, tensors

In one of the comments on this previous question I asked why one of the users who answered kept writing connection terms as $\Gamma_h{}^l{}_k$. Until I read that answer I had never seen tensor indices written in this fashion; in all of my general relativity notes they are written as $\Gamma_{hk}^l$. Throughout the notes, tensors of any rank were written without any shifting of the horizontal position of the indices. Even rank-$2$ tensors were written as $A^i_j$ rather than $A^i{}_j$.

Before asking this question, I searched the web and this site for an explanation and found this question on this site. While that question is related to what I am asking about in this post, the OP of the linked question answers his own question, and I'm not sure I follow his logic. However, one comment made directly below the question states:

one uses $M^i{\,}_j$ to stress the information that $i$ is for rows and $j$ is for columns. The symbolism $M^i_j$ is confusing.

But now this raises a further question: what happens for tensors of rank greater than $2$?

For the sake of argument, let's say we have a rank-$3$ tensor $T^i{}_j{}^k$. If $i$ represents the rows because it sits furthest to the left horizontally, and $j$ represents the columns, then what does $k$ represent?

Matrices only have rows and columns, so I don't see how notation like $\Gamma_h{}^l{}_k$ can possibly make sense. In fact, I thought that for a rank-$2$ tensor, say $A^i_j$, the top (contravariant) index labels the rows and the lower (covariant) index labels the columns (when the tensor is viewed as a matrix).

The closest I managed to get to an answer for why one index is shifted horizontally relative to the other is this post on Physics Stack Exchange. The answers there do explain that it is the horizontal position of the indices that dictates whether they correspond to the rows or the columns of a matrix. But this still does not explain why I cannot simply write contravariant indices for rows and covariant indices for columns, i.e. write $A^i_j$ with every index in the same horizontal position (the $i$ directly above the $j$).


So to summarize: what is the fundamental reason for shifting the horizontal position of the lower index relative to the upper index (or vice versa), and how do I interpret this as rows/columns for a rank-$3$ tensor (or above)?

Best Answer

This happens because many people like to use the shorthand of raising and lowering indices. By this I mean the notation $$v_i = \sum_j g_{ij}v^j. \tag 1$$ If $v^i$ are the components of a vector $v\in V$ with respect to the basis $\{e_1,\dots,e_n\}$ and you fix a metric $g$ on $V$, then the numbers $v_i$ are the components of the linear map $w\mapsto g(v,w)$ with respect to the dual basis $\{e^1,\dots,e^n\}$ of $V^*$.
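For a quick concrete illustration (the metric here is made up just for the example), take $V=\mathbb{R}^2$ with $g_{11}=1$, $g_{22}=-1$ and $g_{12}=g_{21}=0$. Then $(1)$ gives $$v_1 = v^1, \qquad v_2 = -v^2,$$ so the covector $w\mapsto g(v,w)$ has components $(v^1,\,-v^2)$ with respect to the dual basis.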

This notation causes no ambiguity when you "lower the index" of a vector or "raise the index" of a covector, but it may cause ambiguity if you try to extend the shorthand to tensors of higher rank. I will now elaborate on this point.

Suppose you have a tensor $T\in V\otimes V$. You can write it in components as $$\sum_{ij}T^{ij}e_i\otimes e_j$$ where $e_i\otimes e_j$ are the basis vectors of $V\otimes V$. There is an associated linear map $R:V\to V$ that takes the vector $w\in V$ to the vector $$R(w) = \sum_{ijk} T^{ik}g_{kj}w^j e_i \in V.$$ Then $R$ can be written as $$R = \sum_{ijk}T^{ik}g_{kj}e^j\otimes e_i. \tag 2$$

Notice that there is another linear map $S:V\to V$, defined by taking the vector $w\in V$ to the vector $$S(w) = \sum_{ijk}T^{ki}g_{kj}w^j e_i \in V,$$ so that $$S = \sum_{ijk}T^{ki}g_{kj}e^j\otimes e_i \in V^*\otimes V. \tag 3$$ In general $R\neq S$. The difference lies in which index of $T$ you are summing over.
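To see concretely that the two maps differ (with the metric again chosen just for illustration), take $g_{kj}=\delta_{kj}$. Then $$R(w) = \sum_{ij}T^{ij}w^j e_i \qquad\text{while}\qquad S(w) = \sum_{ij}T^{ji}w^j e_i,$$ so on components $R$ acts by the matrix with entries $T^{ij}$ and $S$ by its transpose. Hence $R = S$ only when $T^{ij}=T^{ji}$, i.e. only for symmetric $T$.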

Now express $R$ and $S$ in components with respect to the basis $e^j\otimes e_i$ of $V^*\otimes V$ as follows: \begin{align} R &= \sum_{ij} R^i_j e^j\otimes e_i, \\ S &= \sum_{ij} S^i_j e^j\otimes e_i. \end{align} There is no problem at all: $R^i_j$ and $S^i_j$ are just defined to be the components of $R$ and $S$. By equations $(2)$ and $(3)$, these components relate to the components of $T$ by the equations \begin{align} R^i_j &= \sum_{k}T^{ik}g_{kj}, \\ S^i_j &= \sum_{k}T^{ki}g_{kj}. \end{align}

Indeed, given any $T\in V\otimes V$, the last two equations completely determine the components of two objects $R$ and $S$, which are elements of $V^*\otimes V$. Looking at the numbered equations, you can see that the components of both $R$ and $S$ are related to the components of $T$ in the same manner as the components $v_i$ are related to the components $v^i$ above, but the notations $R^i_j$ and $S^i_j$ don't make this evident. That's why a lot of people like to say that what we have done is "lowering an index of $T$". The difference is which index we lowered, and indeed there is a difference: \begin{align} R^i_j &= \sum_{k}T^{ik}g_{kj} = T^i{}_j, \\ S^i_j &= \sum_{k}T^{ki}g_{kj} = T_j{}^i. \end{align}

You can see that if we dropped the horizontal spacing and wrote $T^i_j$ for both objects, we wouldn't know whether we were talking about the components of $R$ or the components of $S$. That is, we wouldn't know whether we had lowered the first or the second index of $T^{ij}$.
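If it helps, here is a minimal numerical check of the last two displayed equations, assuming NumPy; the component values of $T$ and $g$ are made up for the example:

```python
import numpy as np

# Made-up component values for the example: a non-symmetric T^{ij}
# and a simple diagonal metric g_{ij}.
T = np.array([[0.0, 1.0],
              [2.0, 3.0]])   # T[i, j] stands for T^{ij}; note T != T.T
g = np.diag([1.0, -1.0])     # g[k, j] stands for g_{kj}

# R^i_j = sum_k T^{ik} g_{kj}  -- lowering the SECOND index of T
R = np.einsum('ik,kj->ij', T, g)

# S^i_j = sum_k T^{ki} g_{kj}  -- lowering the FIRST index of T
S = np.einsum('ki,kj->ij', T, g)

print(R)                  # [[ 0. -1.]
                          #  [ 2. -3.]]
print(S)                  # [[ 0. -2.]
                          #  [ 1. -3.]]
print(np.allclose(R, S))  # False: the two lowerings give different objects
```

Writing $T^i{}_j$ and $T_j{}^i$ is precisely what distinguishes the arrays `R` and `S` in a computation like this.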

Edit: addressing the specific case FutureCop (OP) is asking about, the notation $$\Gamma_h{}^l{}_k$$ makes sense if you are (say) planning to lower the middle index: $$\Gamma_{hlk} = \sum_m g_{lm}\Gamma_h{}^m{}_k.$$ If you do this without taking care of the horizontal spacing and write $$\Gamma^l_{hk},$$ can you unambiguously say what, e.g., $\Gamma_{abc}$ means?
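To spell the ambiguity out: once the spacing is dropped, $\Gamma_{abc}$ could equally well stand for any of the three distinct objects $$\sum_m g_{am}\Gamma^m{}_{bc}, \qquad \sum_m g_{bm}\Gamma_a{}^m{}_c, \qquad \sum_m g_{cm}\Gamma_{ab}{}^m,$$ depending on whether the upper index originally sat in the first, second, or third slot, and nothing in the symbol $\Gamma_{abc}$ records which.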