[Physics] Inverse Metric Tensor

differential-geometry · general-relativity · metric-tensor · tensor-calculus

First the setup:

  • Let $\mathcal M$ be a $2$-dimensional manifold.

  • Let $U_P$ be some open neighbourhood of a point $P \in \mathcal M$.

  • Let $\mathcal F : U_P \rightarrow \mathbb R \times \mathbb R$ be a frame function over this neighbourhood.

  • Let $\{\partial_a\}$ and $\{\tilde{d} x^a\}$ denote the associated coordinate basis of vector fields and the dual basis of one-form fields over $U_P$, respectively.

Now suppose that we are given a metric tensor over $\mathcal M$ evaluated at $P$ in matrix form as follows:
$$g_P = \left( \begin{matrix}
2 & 1 \\
1 & 3 \end{matrix} \right)$$
If I'm not mistaken, this is shorthand for saying:
$$g_P = 2 (\tilde d x^1 \otimes \tilde d x^1) + (\tilde d x^1 \otimes \tilde d x^2) + (\tilde d x^2 \otimes \tilde d x^1) + 3(\tilde d x^2 \otimes \tilde d x^2)$$

Now apparently the 'inverse' metric tensor is, in matrix form, the matrix inverse of the metric tensor:
$$g_P^{-1} = \frac 1 5 \left( \begin{matrix}
3 & -1 \\
-1 & 2 \end{matrix} \right)$$
…which is shorthand for:

$$g_P^{-1} = \frac 1 5 \left[ 3(\partial_1 \otimes \partial_1) - (\partial_1 \otimes \partial_2) - (\partial_2 \otimes \partial_1) + 2(\partial_2 \otimes \partial_2)\right]$$
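A quick numpy check of the arithmetic ($\det g_P = 2 \cdot 3 - 1 \cdot 1 = 5$), with the entries hard-coded from the matrix above:

```python
import numpy as np

# Components of the metric at P in the coordinate basis.
g = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# det(g) = 5, so the matrix inverse should be (1/5) [[3, -1], [-1, 2]].
claimed = np.array([[3.0, -1.0],
                    [-1.0, 2.0]]) / 5.0
assert np.allclose(np.linalg.inv(g), claimed)
```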

In what sense is this the inverse of the metric tensor? I might have expected the 'inverse' of a tensor to be its corresponding element in the dual space:

$$g^{-1} = g_{ij} \; (\partial_i \otimes \partial_j)$$

…or, more likely, the $ \left(\begin{smallmatrix} 2 \\ 0 \end{smallmatrix}\right) $ tensor which operates with it to give the identity (although there would be an unlimited number of those, meaning the inverse would not be unique).

Please explain the meaning of 'inverse' in this context and what it has to do with matrix inversion. I thought that matrices were simply a notational convenience; their conventional multiplicative operation does not coincide with that of tensors.

Best Answer

Recall that one-forms are defined as linear maps from vector fields to real numbers, so that for every one-form $\alpha$ and every vector field $X$, $\alpha(X)$ is a scalar function. Hence on a simple tensor $\alpha \otimes X$ we can define the contraction by $C : \alpha\otimes X \mapsto \alpha(X)$ and extend by linearity. On a tensor with more factors, for example $\alpha_1 \otimes \alpha_2 \otimes X_1 \otimes X_2$, we have one contraction for each pairing of a one-form factor with a vector-field factor. For example, $$C^1_2 : \alpha_1 \otimes \alpha_2 \otimes X_1 \otimes X_2 \mapsto \alpha_1(X_2)\, \alpha_2 \otimes X_1.$$
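To make this concrete, here is a minimal numpy sketch of the simplest contraction $C : \alpha \otimes X \mapsto \alpha(X)$ in components (the component values are made up for illustration):

```python
import numpy as np

# Made-up components of a one-form alpha and a vector field X at a point.
alpha = np.array([1.0, -2.0])
X = np.array([3.0, 0.5])

# The simple tensor alpha ⊗ X has the outer product as its component array...
outer = np.outer(alpha, X)

# ...and contracting it, C(alpha ⊗ X) = alpha(X), amounts to taking the trace.
assert np.isclose(np.trace(outer), alpha @ X)
print(alpha @ X)  # 2.0
```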

Now suppose that dual bases of vector fields and one-forms are given. With respect to these any tensor can be written $$T = \sum T^{a_1 a_2 \cdots}{}_{r_1 r_2 \cdots} \alpha^{r_1} \otimes \alpha^{r_2} \otimes \cdots \otimes X_{a_1} \otimes X_{a_2} \otimes \cdots$$ where $$\alpha^i(X_j) = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}.$$ Using the previous definition you can convince yourself that the components of the $(1,1)$ contraction are indeed $$T^{i a_2 \cdots}{}_{i r_2 \cdots}$$ with an implied sum over $i$, as in Einstein notation.
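Again as a sketch with made-up component values, numpy's einsum implements exactly this Einstein-notation sum over a repeated index:

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up components T^{a1 a2}_{r1 r2}, stored as T[a1, a2, r1, r2].
T = rng.standard_normal((2, 2, 2, 2))

# The (1,1) contraction pairs the first upper (vector) index with the
# first lower (one-form) index: (C T)^{a2}_{r2} = T^{i a2}_{i r2}.
CT = np.einsum('iair->ar', T)

# The same repeated-index sum written out explicitly.
CT_explicit = sum(T[i, :, i, :] for i in range(2))
assert np.allclose(CT, CT_explicit)
```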

Now take a vector field $X$. It is a tensor so it has components $X^\mu$. We can take the tensor product and contraction $C^2_1 (g \otimes X)$, which in components is $g_{\mu\nu} X^\nu.$ Thus $g$ defines a linear map from vector fields to one-forms. Likewise a $2 \choose 0$ tensor $h$ defines a map from one-forms to vector fields by $\alpha_\mu \mapsto h^{\nu\mu}\alpha_\mu$. The composition of these maps is a $(1,1)$ tensor, $$(h\circ g)^\mu{}_\nu = h^{\mu\rho} g_{\rho\nu}.$$ Repeating the argument, a $(1,1)$ tensor can be seen as a linear map from vector fields to vector fields. Hence it makes sense to consider the equation for $h$, $$h^{\mu\rho}g_{\rho\nu} = \delta^\mu{}_\nu,$$ where $\delta^\mu{}_\nu$ are the components of the identity tensor. The unique solution of this equation is the inverse of $g$, and at the component level it is exactly the matrix inverse.
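A short numpy sketch of this chain of maps, using the metric from the question (the vector components are made up):

```python
import numpy as np

g = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # g_{mu nu} at P

# g maps vectors to one-forms: X^nu -> g_{mu nu} X^nu (lowering an index).
X = np.array([1.0, -1.0])
X_flat = g @ X

# The inverse metric h^{mu rho} solves h^{mu rho} g_{rho nu} = delta^mu_nu,
# which at the component level is exactly matrix inversion.
h = np.linalg.inv(g)
assert np.allclose(h @ g, np.eye(2))

# Raising the lowered index recovers X: the two maps are mutually inverse.
assert np.allclose(h @ X_flat, X)
```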

As you can see, we can think of tensors as (multi-)linear maps. The components of a composition are given by the same expression as the entries of a matrix product. This is not surprising: matrix multiplication is an interesting operation precisely because it gives the components of compositions of linear maps. Any linear algebra book that gives you matrices before talking about linear maps is doing it in the wrong order.
