Abstract tensor index notation for matrix transpose as (1,1) tensor

linear-algebra, tensors, transpose

In this answer to this question about the transpose of a (1,1) tensor, the answerer gives the following equation:
$$(A^T)_j{}^i=A^i{}_j$$
as the transpose of the tensor $A$. My question is: what do the LHS and RHS of this equation mean precisely, and what is the general rule that allows one to write an equation like this? My confusion is that $(A^T)_j{}^i$ seems to be an element of the space $V^* \otimes W$, while $A^i{}_j$ seems to be an element of $W \otimes V^*$. These are different spaces, so we shouldn't be able to write an equality between them. This suggests that the equation can only be meaningfully interpreted if we read the indices as actual indices of a multi-dimensional array, rather than as abstract index notation for tensors, which means we have chosen a basis. But then how would we write the transpose of a (1,1) tensor abstractly, without reference to a basis?

Best Answer

The linked post comes from Physics.SE and, in physics, the distinction between indices which label the entry in a multi-dimensional array and abstract indices is not always made.

When the indices are read as labels for the entries of a multi-dimensional array, we are only dealing with an equality between matrix entries, which happens to hold in any basis. This is possible because the two spaces $V^*\otimes W$ and $W\otimes V^*$ are canonically isomorphic.
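
To spell out the "holds in any basis" claim (a quick check in my own notation, assuming $A\in W\otimes V^*$ and writing $P$, $Q$ for the change-of-basis matrices on $W$ and $V$): the components transform as
$$A'^i{}_j=(P^{-1})^i{}_k\,A^k{}_l\,Q^l{}_j,\qquad (A'^T)_j{}^i=Q^l{}_j\,(A^T)_l{}^k\,(P^{-1})^i{}_k,$$
so if $(A^T)_l{}^k=A^k{}_l$ in one pair of bases, the two right-hand sides coincide and $(A'^T)_j{}^i=A'^i{}_j$ holds in any other pair of bases as well.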

To deal with this in abstract index notation, we adopt the convention that permutations of indices represent the corresponding braiding maps.

Given $n$ vector spaces $V_1, \ldots, V_n$ and a permutation $\sigma \in \mathfrak{S}_n$, there is a natural braiding map: $$\tau_\sigma : V_1\otimes \ldots\otimes V_n \to V_{\sigma(1)}\otimes \ldots\otimes V_{\sigma(n)}$$
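
Once bases are chosen, $\tau_\sigma$ becomes a plain permutation of array axes. Here is a minimal NumPy sketch of that reading (the dimensions and the permutation are arbitrary choices of mine, not part of the linked answer):

```python
import numpy as np

# Once bases are fixed, a tensor in V_1 ⊗ V_2 ⊗ V_3 is a 3-dimensional array,
# and the braiding tau_sigma is just a permutation of its axes.
rng = np.random.default_rng(0)
T = rng.standard_normal((2, 3, 4))   # dims of V_1, V_2, V_3, chosen arbitrarily

# 0-indexed permutation: slot k of the result holds the V_{sigma(k)} factor,
# matching tau_sigma : V_1 ⊗ V_2 ⊗ V_3 -> V_{sigma(1)} ⊗ V_{sigma(2)} ⊗ V_{sigma(3)}.
sigma = (2, 0, 1)
tau_sigma_T = np.transpose(T, axes=sigma)

# Componentwise: (tau_sigma T)[j_1, j_2, j_3] = T[i_1, i_2, i_3] with i_{sigma(k)} = j_k.
assert tau_sigma_T.shape == (4, 2, 3)
assert tau_sigma_T[1, 0, 2] == T[0, 2, 1]
```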

If $T \in V\otimes W$ and $R \in W\otimes V$, then $T_{ab} = R_{ba}$ means $T = \tau_{(12)}R$.

In our case, we have $A^T = \tau_{(12)}A$ with $\tau_{(12)}$ the braiding map $W\otimes V^*\to V^*\otimes W$.
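
Concretely, once bases of $V$ and $W$ are fixed, this braiding is just the ordinary matrix transpose. A minimal NumPy check (the matrix and the storage convention `A[i, j]`, with `i` the $W$ index and `j` the $V^*$ index, are my own illustrative choices):

```python
import numpy as np

# A in W ⊗ V*, stored as A[i, j] with i the W index and j the V* index
# (dim W = 2, dim V = 3, values arbitrary).
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# tau_(12): W ⊗ V* -> V* ⊗ W swaps the two slots; on the array this is the
# ordinary matrix transpose.
A_T = np.transpose(A, axes=(1, 0))   # same as A.T

# The index equation (A^T)_j{}^i = A^i{}_j, checked entry by entry:
assert all(A_T[j, i] == A[i, j] for i in range(2) for j in range(3))
```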
