General Relativity – A Helpful Proof in Contracting the Christoffel Symbol

differential-geometry, general-relativity, homework-and-exercises, metric-tensor

In all my time learning general relativity, this is the one identity that I cannot get around.
$$ \Gamma_{\alpha \beta}^{\alpha} = \partial_{\beta}\ln\sqrt{-g} \tag{1}$$
where $g$ is the determinant of the metric tensor $g_{\alpha \beta}$.
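
As a concrete sanity check (a Riemannian example, so $\sqrt{g}$ takes the place of $\sqrt{-g}$), take flat polar coordinates with $g_{\alpha\beta} = \operatorname{diag}(1, r^2)$, so that $\sqrt{g} = r$. Then
$$ \Gamma_{\alpha r}^{\alpha} = \Gamma_{rr}^{r} + \Gamma_{\theta r}^{\theta} = 0 + \frac{1}{r} = \partial_{r}\ln r, $$
consistent with $(1)$.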

With the Christoffel symbol, we start by contracting

$$ \begin{align}
\Gamma_{\alpha \beta}^{\alpha} &= \frac{1}{2} g^{\alpha\gamma} (\partial_{\alpha} g_{\beta\gamma} + \partial_{\beta} g_{\alpha\gamma} - \partial_{\gamma} g_{\alpha\beta} ) \\
&= \frac{1}{2} g^{\alpha\alpha} ( \partial_{\beta} g_{\alpha\alpha}) \\
&= \frac{1}{2g_{\alpha\alpha}} ( \partial_{\beta} g_{\alpha\alpha})
\end{align}\tag{2}$$

where I took $\gamma \rightarrow \alpha$ and $g^{\alpha\alpha} = 1/g_{\alpha\alpha}$.

I have no clue what steps to take next. MTW gives a hint by saying to use the results from an exercise, which are

$$\det A = \det||A^{\lambda}_{\ \ \rho}|| = \tilde{\epsilon}^{\alpha\beta\gamma\delta}A^{0}_{\ \ \alpha}A^{1}_{\ \ \beta}A^{2}_{\ \ \gamma}A^{3}_{\ \ \delta} $$

$$(A^{-1})^{\mu}_{\ \ \alpha}(\det A) = \frac{1}{3!}\delta_{\alpha\beta\gamma\delta}^{\mu\nu\rho\sigma} A^{\beta}_{\ \ \nu} A^{\gamma}_{\ \ \rho}A^{\delta}_{\ \ \sigma} $$

$$ \mathbf{d}\ln|\det A| = \mathrm{trace}(A^{-1}\mathbf{d}A) ,\tag{3}$$
where $\mathbf{d}A$ is the matrix $||\mathbf{d}A^{\alpha}_{\ \ \mu}||$ whose entries are one-forms.
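
As a purely numerical illustration of the last relation (this sketch is not from MTW; the matrix $A(t)$ below is an arbitrary smooth, invertible example), one can compare both sides along a one-parameter family using finite differences:

```python
# Minimal numerical sanity check of d ln|det A| = trace(A^{-1} dA).
# A(t) is an arbitrary smooth, invertible 4x4 matrix (illustrative only).
import numpy as np

def A(t):
    base = np.array([[2.0, 0.3, 0.0, 0.1],
                     [0.3, 1.5, 0.2, 0.0],
                     [0.0, 0.2, 3.0, 0.4],
                     [0.1, 0.0, 0.4, 2.5]])
    return base + t * np.diag([1.0, 0.5, -0.3, 0.2])

t, h = 0.7, 1e-6

# Left side: d/dt of ln|det A(t)| by central difference.
lhs = (np.log(abs(np.linalg.det(A(t + h))))
       - np.log(abs(np.linalg.det(A(t - h))))) / (2 * h)

# Right side: trace(A^{-1} dA/dt), with dA/dt also by central difference.
dA_dt = (A(t + h) - A(t - h)) / (2 * h)
rhs = np.trace(np.linalg.inv(A(t)) @ dA_dt)

print(lhs, rhs)  # the two values agree up to finite-difference error
```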

I fail to see how, from what I have done, the metric turns into its determinant and then becomes the result at the top.

Best Answer

Recall the matrix identity $$\tag{1}\log\det M=\operatorname{tr}\log M.$$ If $M=M(\lambda)$ is differentiable in $\lambda$, then $$\tag{2}\frac{d}{d\lambda}\log\det M=\operatorname{tr}\left(M^{-1}\frac{d}{d\lambda} M\right).$$ The proof of $(1)$ for symmetric matrices follows from the usual formulae for the trace and determinant in terms of eigenvalues$^{1}$.
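
To spell out the eigenvalue argument in the diagonalizable case: if $M$ has eigenvalues $\lambda_1,\dots,\lambda_n$, then $$\log\det M=\log\prod_i\lambda_i=\sum_i\log\lambda_i=\operatorname{tr}\log M,$$ since $\log M$ has eigenvalues $\log\lambda_i$.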

As for the Christoffels, we have $$\Gamma^i{}_{ij}=\frac{1}{2}g^{ik}(\partial_i g_{jk}+\partial_j g_{ik}-\partial_k g_{ij})=\frac{1}{2}g^{ik}\partial_j g_{ik}=\frac{1}{2}\operatorname{tr}(g^{-1} \partial_j g).$$ (In the second step the first and third terms cancel after relabeling $i\leftrightarrow k$ and using the symmetry of $g^{ik}$.) The last equality is just what the contraction of indices means for the (symmetric!) matrix $g=(g_{ij})$, and there is an error in the indices in OP's post. Now, using $(2)$ we have $$\Gamma^i{}_{ij}=\frac{1}{2}\partial_j\log \det g.$$ This can be brought into the form $$\Gamma^i{}_{ij}=\partial_j \log\sqrt{|\det g|}$$ by the usual rules of calculus.
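
As a sanity check of this result, here is a small sympy sketch (not part of the original answer; the sample metric, a spatial Schwarzschild slice in coordinates $(r,\theta,\phi)$, is chosen only for illustration) that verifies $\Gamma^i{}_{ij}=\partial_j\log\sqrt{|\det g|}$ symbolically:

```python
# Symbolic check of Gamma^i_{ij} = d_j log sqrt(|det g|) with sympy.
# The metric below (a spatial Schwarzschild slice) is just a sample choice.
import sympy as sp

r, th, ph, M = sp.symbols('r theta phi M', positive=True)
x = [r, th, ph]
g = sp.diag(1/(1 - 2*M/r), r**2, r**2*sp.sin(th)**2)  # metric components g_{ij}
ginv = g.inv()
n = len(x)

def Gamma(i, j, k):
    # Christoffel symbol Gamma^i_{jk} = (1/2) g^{il} (d_j g_{kl} + d_k g_{jl} - d_l g_{jk})
    return sp.Rational(1, 2) * sum(
        ginv[i, l] * (sp.diff(g[k, l], x[j]) + sp.diff(g[j, l], x[k]) - sp.diff(g[j, k], x[l]))
        for l in range(n))

detg = g.det()
for j in range(n):
    lhs = sum(Gamma(i, i, j) for i in range(n))   # contraction Gamma^i_{ij}
    rhs = sp.diff(detg, x[j]) / (2 * detg)        # equals d_j log sqrt(det g) for det g > 0
    print(x[j], sp.simplify(lhs - rhs))           # prints 0 for each coordinate
```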


$^{1}$ For symmetric matrices, such as $g$, it is easy because $g$ can be diagonalized. For other matrices you might need a Jordan normal form to compute $\log M$.