[Physics] Arbitrary tensor covariant derivative

covariance, differentiation, tensor-calculus

what are the rules for performing covariant derivatives on tensors of arbitrary rank?

I found a few examples of Tensor derivatives:
$$\nabla_{c} T^a {}_{b} = \partial_{c}T^a {}_{b} + \Gamma^a{}_{cd} T^d {}_{b} - \Gamma^d {}_{bc}T^a {}_{d}$$

$$\nabla_{c} T_{ab} = \partial_{c}T_{ab} - \Gamma^d{}_{ac} T_{db} - \Gamma^d {}_{bc}T_{ad}$$

$$\nabla_{c} T^{ab} = \partial_{c}T^{ab} + \Gamma^a{}_{cd} T^{db} + \Gamma^b {}_{cd}T^{ad}$$

It seems to me that you take the partial derivative of the tensor in question, and then, for each index, add a term built from $\Gamma^{x}{}_{yz}$ and the original tensor with that index relabelled slightly. The term is +ve if the index is contravariant and -ve if the index is covariant. Why is this true?

The orientation of the indices confuses me; could someone give me a general rule or formula that I can follow for a tensor of any rank?

Cheers

Best Answer

The rules are simple but long, as stated in Wikipedia. I'll elaborate on them here.

  • We start with a rank $(r,s)$ tensor $T$ in $d$ dimensions.

  • We seek the $d^{r+s+1}$ components of $\nabla T$, a rank $(r,s+1)$ tensor.

  • We can think of these components as collected into $d$ sets of $d^{r+s}$, one for each of the $d$ values of the free index $\gamma$ in the expression $\nabla_\gamma T$.

  • To find $(\nabla_\gamma T)^{\alpha_1\cdots\alpha_r}{}_{\beta_1\cdots\beta_s}$ (that is, the upper $\alpha_1\cdots\alpha_r$, lower $\beta_1\cdots\beta_s$ component of the lower $\gamma$ component of $\nabla T$), we start by taking the partial derivative of the appropriate component: $\partial_\gamma (T^{\alpha_1\cdots\alpha_r}{}_{\beta_1\cdots\beta_s})$.

  • Now we need to correct for the curvature of the manifold. This is the crux of covariant differentiation. Basically, we were blithely moving along an infinitesimal amount in the $\gamma$-direction, tracking how all the components of $T$ seemed to change to first order in our motion. Even though each and every point in the manifold has its own, separate tangent space wherein lives a version of $T$, we can connect nearby points via the connection coefficients $\Gamma$. If we had Cartesian coordinates in a flat manifold, there would be no need for a correction. In general, though, the tangent spaces are themselves shifting around underneath our feet as we inch along in the $\gamma$-direction.

    • For each upper index $\alpha_i$, add a term $\Gamma^{\alpha_i}{}_{\sigma\gamma} T^{\alpha_1\cdots\alpha_{i-1}\sigma\alpha_{i+1}\cdots\alpha_r}{}_{\beta_1\cdots\beta_s}$, summing over the dummy index $\sigma$.

    • For each lower index $\beta_j$, subtract a term $\Gamma^\sigma{}_{\beta_j\gamma} T^{\alpha_1\cdots\alpha_r}{}_{\beta_1\cdots\beta_{j-1}\sigma\beta_{j+1}\cdots\beta_s}$, again summing over $\sigma$.
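Collecting these steps into a single expression (in the notation above; for a symmetric connection the order of the two lower indices on $\Gamma$ is immaterial):

$$(\nabla_\gamma T)^{\alpha_1\cdots\alpha_r}{}_{\beta_1\cdots\beta_s} = \partial_\gamma T^{\alpha_1\cdots\alpha_r}{}_{\beta_1\cdots\beta_s} + \sum_{i=1}^{r} \Gamma^{\alpha_i}{}_{\sigma\gamma}\, T^{\alpha_1\cdots\sigma\cdots\alpha_r}{}_{\beta_1\cdots\beta_s} - \sum_{j=1}^{s} \Gamma^{\sigma}{}_{\beta_j\gamma}\, T^{\alpha_1\cdots\alpha_r}{}_{\beta_1\cdots\sigma\cdots\beta_s},$$

where in the $i$-th (or $j$-th) term of each sum, $\sigma$ replaces the $i$-th upper (or $j$-th lower) index and is itself summed over. Setting $r=s=1$ reproduces the first formula in the question.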

In the end, we have a "tensor" equation, which is really a relation between a bunch of scalars ranging over various free indices. Probably the most confusing thing is the omission of parentheses by most authors, as discussed in this answer.

This result is the unique definition of covariant differentiation that respects all the rules and structure we want. As you could tediously check, it is linear, it obeys the Leibniz rule for products of tensors, and it reduces to partial differentiation on rank $(0,0)$ tensors (i.e. scalars).
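As a concrete sanity check (my own illustration, not part of the derivation above), the rule can be applied numerically to the flat plane in polar coordinates $(r,\theta)$, where the Christoffel symbols are known in closed form. Metric compatibility, $\nabla_c g_{ab} = 0$, then comes out to zero component by component:

```python
import itertools

# Flat plane in polar coordinates (r, theta) -- an illustrative choice.
# Metric: g_rr = 1, g_thth = r^2.  Nonzero Christoffel symbols:
#   Gamma^r_{th th} = -r,   Gamma^th_{r th} = Gamma^th_{th r} = 1/r
def gamma(a, b, c, x):
    """Gamma^a_{bc} at the point x = (r, theta); coordinate 0 = r, 1 = theta."""
    r = x[0]
    if a == 0 and b == 1 and c == 1:
        return -r
    if a == 1 and {b, c} == {0, 1}:
        return 1.0 / r
    return 0.0

def covariant_derivative(T, partial, r_rank, s_rank, x, d=2):
    """Apply the general rule: the partial derivative, plus one Gamma term
    per upper index, minus one Gamma term per lower index.
    T(idx) returns a component of the tensor (upper indices first);
    partial(c, idx) returns d_c T(idx).  Returns a dict keyed by
    (c,) + idx, i.e. the components of nabla_c T."""
    n = r_rank + s_rank
    result = {}
    for c in range(d):
        for idx in itertools.product(range(d), repeat=n):
            val = partial(c, idx)
            for i in range(r_rank):              # upper indices: +Gamma
                for s in range(d):
                    jdx = idx[:i] + (s,) + idx[i + 1:]
                    val += gamma(idx[i], s, c, x) * T(jdx)
            for j in range(r_rank, n):           # lower indices: -Gamma
                for s in range(d):
                    jdx = idx[:j] + (s,) + idx[j + 1:]
                    val -= gamma(s, idx[j], c, x) * T(jdx)
            result[(c,) + idx] = val
    return result

# Check metric compatibility: nabla_c g_ab = 0 for the (0,2) metric tensor.
x0 = (1.7, 0.3)
g = lambda idx: {(0, 0): 1.0, (1, 1): x0[0] ** 2}.get(idx, 0.0)
dg = lambda c, idx: 2.0 * x0[0] if (c, idx) == (0, (1, 1)) else 0.0

nabla_g = covariant_derivative(g, dg, 0, 2, x0)
assert all(abs(v) < 1e-12 for v in nabla_g.values())
```

The only nontrivial cancellation is in $\nabla_r g_{\theta\theta} = \partial_r(r^2) - 2\,\Gamma^\theta{}_{\theta r}\, g_{\theta\theta} = 2r - 2r = 0$, which the loop reproduces automatically.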
