Conversion of mixed tensors into mixed tensors and into covariant (or contravariant) ones

index-notation, special-relativity, tensors

I am an undergraduate Physics student currently taking a course on Special Relativity, but I am getting quite confused by tensors and their indices.
My question is: how does one convert mixed tensors to contravariant or covariant tensors, and is it possible to interchange indices in a mixed tensor?

Best Answer

A $(p, q)$-tensor on a real vector space $V$ is a multilinear map $T : (V^*)^p\times V^q \to \mathbb{R}$.

Let $\{e_1, \dots, e_n\}$ be a basis for $V$ and $\{e^1,\dots, e^n\}$ the dual basis of $V^*$, then the tensor $T$ is determined by the collection of real numbers $T^{i_1, \dots, i_p}_{j_1, \dots, j_q} := T(e^{i_1},\dots, e^{i_p}, e_{j_1}, \dots, e_{j_q})$. If $\{\hat{e}_1, \dots, \hat{e}_n\}$ is another basis for $V$ and $\{\hat{e}^1, \dots, \hat{e}^n\}$ is the corresponding dual basis, then we get another collection of real numbers $\hat{T}^{i_1',\dots, i_p'}_{j_1', \dots, j_q'} := T(\hat{e}^{i_1'}, \dots, \hat{e}^{i_p'}, \hat{e}_{j_1'},\dots, \hat{e}_{j_q'})$.

If $A$ denotes the change of basis matrix from $\{e_1, \dots, e_n\}$ to $\{\hat{e}_1, \dots, \hat{e}_n\}$ then, using the Einstein summation convention, we have $\hat{e}_i = A^k_ie_k$. The change of basis matrix from $\{e^1, \dots, e^n\}$ to $\{\hat{e}^1, \dots, \hat{e}^n\}$ is $A^{-1}$ so $\hat{e}^j = (A^{-1})^j_k e^k$. It follows that

$$\hat{T}^{i_1',\dots,i_p'}_{j_1',\dots,j_q'} = T^{i_1,\dots,i_p}_{j_1,\dots,j_q}(A^{-1})^{i_1'}_{i_1}\dots(A^{-1})^{i_p'}_{i_p}A^{j_1}_{j_1'}\dots A^{j_q}_{j_q'}.$$

In physics, a $(p, q)$-tensor is often regarded as a collection of real numbers $T^{i_1,\dots, i_p}_{j_1,\dots, j_q}$ which transforms under a change of basis in the way stated above. The indices $j_1, \dots, j_q$ transform according to the change of basis matrix, so we say that they are covariant, while the indices $i_1, \dots, i_p$ transform according to the inverse of the change of basis matrix, so we say that they are contravariant. Hence a $(p, q)$-tensor has $p$ contravariant indices and $q$ covariant indices.
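
To make the transformation rule concrete, here is a minimal numerical sketch (Python/NumPy is my own choice here, not something from the discussion above) that checks the formula for a $(1, 1)$-tensor: contract the contravariant index with $A^{-1}$ and the covariant index with $A$, which for a matrix of components amounts to conjugation $A^{-1}TA$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Convention: the first array axis is the upper (contravariant) index,
# the second is the lower (covariant) index.
T = rng.normal(size=(n, n))      # components T^i_j of a (1, 1)-tensor
A = rng.normal(size=(n, n))      # change of basis matrix, hat{e}_i = A^k_i e_k
A_inv = np.linalg.inv(A)

# Transformation law: hat{T}^{I}_{J} = (A^{-1})^{I}_{i} A^{j}_{J} T^{i}_{j}
T_hat = np.einsum("Ii,jJ,ij->IJ", A_inv, A, T)

# For a (1, 1)-tensor this is exactly matrix conjugation A^{-1} T A
assert np.allclose(T_hat, A_inv @ T @ A)
```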

Examples:

  • A $(0, 1)$-tensor is nothing but a linear map $V \to \mathbb{R}$.
  • Given a vector $v \in V$, one obtains a $(1, 0)$-tensor $T_v$ defined by $T_v(\alpha) = \alpha(v)$.
  • An inner product on $V$ is an example of a $(0, 2)$-tensor.
  • A linear map $L : V \to V$ can be viewed as a $(1, 1)$-tensor $T_L$ defined by $T_L(\alpha, v) = \alpha(L(v))$; a component computation for this example is sketched just after this list.
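
Here is a small sketch of the last example in components (again Python/NumPy, my own choice, using the standard basis of $\mathbb{R}^n$ and its dual basis): the components $T^i_j = e^i(L(e_j))$ of $T_L$ are exactly the matrix entries of $L$.

```python
import numpy as np

n = 3
L = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])       # matrix of a linear map L : R^3 -> R^3

e = np.eye(n)                         # e[:, j] is the basis vector e_j
# For the standard dual basis, e^i(v) is just the i-th component of v,
# so T^i_j = e^i(L(e_j)) = (L e_j)_i
T = np.array([[(L @ e[:, j])[i] for j in range(n)] for i in range(n)])

assert np.allclose(T, L)              # the components of T_L are the entries of L
```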

A (not necessarily positive-definite) inner product $g$ defines an isomorphism $\Phi_g : V \to V^*$ given by $\Phi_g(v) = g(v, \cdot)$. This isomorphism can be used to transform a $(p, q)$-tensor $T$ into a $(p - 1, q + 1)$-tensor $T'$ by defining $T'(\alpha^1, \dots, \alpha^{p-1}, v_1, \dots, v_{q+1}) := T(\alpha^1, \dots, \alpha^{p-1}, \Phi_g(v_1), v_2, \dots, v_{q+1})$. Likewise, the inverse isomorphism $\Phi_g^{-1}$ can be used to transform a $(p, q)$-tensor into a $(p + 1, q - 1)$-tensor. Doing this repeatedly, we can view a $(p, q)$-tensor as an $(r, s)$-tensor for any $r$ and $s$ with $r, s \geq 0$ and $r + s = p + q$. Note however that the $(r, s)$-tensor we produce depends on the inner product $g$; for a different inner product, the corresponding $(r, s)$-tensor will not be the same.
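
In the Special Relativity setting, $\Phi_g$ is exactly "lowering an index" with the metric and $\Phi_g^{-1}$ is "raising an index". Below is a small sketch of my own (using the Minkowski metric in the $(+,-,-,-)$ convention, which is a choice) that lowers the index of a 4-vector, raises it back, and lowers one index of a $(1, 1)$-tensor to get a $(0, 2)$-tensor.

```python
import numpy as np

# Minkowski metric eta_{mu nu} with signature (+, -, -, -); the sign convention is a choice
eta = np.diag([1.0, -1.0, -1.0, -1.0])
eta_inv = np.linalg.inv(eta)            # components eta^{mu nu} of the inverse metric

v = np.array([2.0, 1.0, 0.0, 3.0])      # contravariant components v^mu of a 4-vector

# Lowering an index (Phi_g in components): v_mu = eta_{mu nu} v^nu
v_low = np.einsum("mn,n->m", eta, v)

# Raising it again with the inverse metric recovers the original components
assert np.allclose(np.einsum("mn,n->m", eta_inv, v_low), v)

# Same idea for a (1, 1)-tensor: T_{mu nu} = eta_{mu rho} T^{rho}_{nu} is a (0, 2)-tensor
T = np.arange(16.0).reshape(4, 4)       # toy components T^mu_nu
T_low = np.einsum("mr,rn->mn", eta, T)
```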


A $(p, q)$-tensor field on a smooth manifold $M$ is a $C^{\infty}(M)$-multilinear map $T : \Gamma(T^*M)^p\times\Gamma(TM)^q \to C^{\infty}(M)$. That is, it assigns a $(p, q)$-tensor on $T_xM$ to every $x \in M$, varying smoothly with $x$.

Given local coordinates $(x^1, \dots, x^n)$ on $U \subseteq M$, there is a basis of sections for $TM|_U$ given by $\{\partial_1, \dots, \partial_n\}$ where $\partial_i = \frac{\partial}{\partial x^i}$, and a dual basis of sections for $T^*M|_U$ given by $\{dx^1, \dots, dx^n\}$. We then obtain a collection of smooth functions $T^{i_1,\dots,i_p}_{j_1,\dots,j_q} := T(dx^{i_1},\dots, dx^{i_p}, \partial_{j_1}, \dots, \partial_{j_q})$ on $U$. If $\{\hat{x}^1, \dots, \hat{x}^n\}$ is another set of local coordinates on $U$, then $\{\hat{\partial}_1, \dots, \hat{\partial}_n\}$ is a basis of sections for $TM|_U$ where $\hat{\partial}_i = \frac{\partial}{\partial\hat{x}^i}$, and $\{d\hat{x}^1,\dots, d\hat{x}^n\}$ is the dual basis of sections for $T^*M|_U$, so we get another collection of smooth functions $\hat{T}^{i_1',\dots,i_p'}_{j_1',\dots,j_q'} := T(d\hat{x}^{i_1'},\dots, d\hat{x}^{i_p'}, \hat{\partial}_{j_1'},\dots, \hat{\partial}_{j_q'})$ on $U$.

Note that $\hat{\partial}_i = \dfrac{\partial x^k}{\partial \hat{x}^i}\partial_k$ and $d\hat{x}^j = \dfrac{\partial \hat{x}^j}{\partial x^k}dx^k$ so

$$\hat{T}^{i_1',\dots,i_p'}_{j_1',\dots,j_q'} = T^{i_1,\dots,i_p}_{j_1,\dots,j_q}\dfrac{\partial \hat{x}^{i_1'}}{\partial x^{i_1}}\dots \dfrac{\partial \hat{x}^{i_p'}}{\partial x^{i_p}}\dfrac{\partial x^{j_1}}{\partial \hat{x}^{j_1'}}\dots \dfrac{\partial x^{j_q}}{\partial \hat{x}^{j_q'}}.$$

Recall that $\left(\dfrac{\partial\hat{x}}{\partial x}\right)^{-1} = \dfrac{\partial x}{\partial\hat{x}}$, so the above is completely analogous to the previous formula for tensors.
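
As a concrete check of this transformation rule, here is a symbolic sketch (Python/SymPy, my own choice; the example itself is not from the answer above): take the flat Euclidean metric $g_{ij} = \delta_{ij}$ on $\mathbb{R}^2$ in Cartesian coordinates $(x, y)$ and transform it, as a $(0, 2)$-tensor field, to polar coordinates $(\hat{x}^1, \hat{x}^2) = (r, \theta)$; the result is the familiar $\mathrm{diag}(1, r^2)$.

```python
import sympy as sp

r, theta = sp.symbols("r theta", positive=True)

# Old (Cartesian) coordinates expressed in the new (polar) coordinates
x = r * sp.cos(theta)
y = r * sp.sin(theta)

# Jacobian J[k, j'] = d x^k / d hat{x}^{j'} with hat{x} = (r, theta)
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, theta)],
               [sp.diff(y, r), sp.diff(y, theta)]])

# Euclidean metric g_{ij} = delta_{ij}, a (0, 2)-tensor field with constant components
g = sp.eye(2)

# Covariant transformation: g_{i'j'} = g_{ij} (dx^i/dhat{x}^{i'}) (dx^j/dhat{x}^{j'})
g_hat = sp.simplify(J.T * g * J)
print(g_hat)    # Matrix([[1, 0], [0, r**2]]), the flat metric in polar coordinates
```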

Examples:

  • A $(0, 1)$-tensor field is nothing but a one-form.
  • Given a vector field $V \in \Gamma(TM)$, one obtains a $(1, 0)$-tensor field $T_V$ defined by $T_V(\alpha) = \alpha(V)$.
  • A Riemannian or Lorentzian metric on $M$ is an example of a $(0, 2)$-tensor field.
  • A bundle map $L : TM \to TM$ can be viewed as a $(1, 1)$-tensor field $T_L$ defined by $T_L(\alpha, V) = \alpha(L(V))$.

As in the vector space case, given a Riemannian or Lorentzian metric (or a non-degenerate metric of any signature), one can transform a $(p, q)$-tensor field into an $(r, s)$-tensor field for any $r, s \geq 0$ with $r + s = p + q$.
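
For completeness, here is the same raising and lowering for a tensor field, again a SymPy sketch of my own: with the flat metric written in polar coordinates as above, lowering the index of the coordinate vector field $\partial_\theta$ (components $(0, 1)$) gives the one-form $r^2\, d\theta$, and raising it again with the inverse metric recovers $\partial_\theta$.

```python
import sympy as sp

r, theta = sp.symbols("r theta", positive=True)

# Flat metric on R^2 in polar coordinates: g = diag(1, r^2), a (0, 2)-tensor field
g = sp.diag(1, r**2)
g_inv = g.inv()                       # components g^{ij} of the inverse metric

V = sp.Matrix([0, 1])                 # components V^i of the vector field d/dtheta

# Lowering the index: V_i = g_{ij} V^j, i.e. the one-form r**2 dtheta
V_low = g * V
print(V_low)                          # Matrix([[0], [r**2]])

# Raising it again with g^{ij} recovers the original vector field components
print(sp.simplify(g_inv * V_low))     # Matrix([[0], [1]])
```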