Hodge star operator: inner product of $k$-forms independent of orthonormal frames

differential-geometry, exterior-algebra, linear-algebra, manifolds, matrices

I'm trying to work out Problem 16-18 of John Lee's Introduction to Smooth Manifolds, about the definition of the Hodge star operator. My question is mainly about subproblem (a); I can prove the rest assuming the property established in (a).

Problem (a) asks you to prove that for each $k=1,\cdots,n$, the Riemannian metric $g$ uniquely determines an inner product on $\Lambda^k T^*_p M$ which satisfies

$$\langle \omega^1 \wedge \cdots \wedge \omega^k, \tau^1 \wedge \cdots\wedge\tau^k \rangle=\mathrm{det}(\langle(\omega^i)^\sharp,(\tau^j)^\sharp\rangle)$$

where $\omega^1, \cdots, \omega^k,\tau^1, \cdots, \tau^k$ are covectors at $p$. The hint is to define inner product locally by declaring $\{\varepsilon^I|_p :I$ is increasing$\}$ to be an orthonormal frame, whenever $(\varepsilon^i)$ is the coframe dual to a local orthonormal frame.

Here is what I tried following the hint: Locally, every $k$-form can be written uniquely as $\alpha=\sum_{I \nearrow}a_I\varepsilon^I$, where $I$ is an increasing multi-index of length $k$. If we have another $k$-form $\beta=\sum_{I \nearrow}b_I\varepsilon^I$, we define $$\langle\alpha,\beta\rangle=\sum_{I \nearrow}a_Ib_I$$ We must prove that this definition is independent of the choice of coframe: if $(\varepsilon^i)$ and $(\eta^i)$ are two coframes, each dual to an orthonormal frame, with $\alpha=\sum_{I \nearrow}a_I\varepsilon^I=\sum_{I \nearrow}a_I^\prime\eta^I$ and $\beta=\sum_{I \nearrow}b_I\varepsilon^I=\sum_{I \nearrow}b_I^\prime\eta^I$, then $\sum_{I \nearrow}a_Ib_I=\sum_{I \nearrow}a_I^\prime b_I^\prime$. Suppose $$(\varepsilon^1,\cdots,\varepsilon^n)=(\eta^1,\cdots,\eta^n)\begin{pmatrix} q^1_1 & \cdots & q^n_1 \\ \vdots & & \vdots \\ q^1_n & \cdots & q^n_n\end{pmatrix}$$

where the matrix $Q=(q^j_i)$ is orthogonal; therefore $\varepsilon^k=\eta^i q^k_i$ (summing over $i$). Substituting into $\varepsilon^I=\varepsilon^{i_1}\wedge\cdots\wedge\varepsilon^{i_k}$, we have

$$\alpha = \sum_{I \nearrow}a_I^\prime\eta^I=\sum_{I \nearrow}a_I(\sum^n_{j=1}\eta^j q^{i_1}_j\wedge\cdots \wedge \sum^n_{j=1}\eta^j q^{i_k}_j)$$
Fixing a multi-index $L=(l_1,\cdots, l_k)$ and comparing the coefficients of $\eta^L$, we have
$$a_L^\prime=\sum_{I \nearrow}a_I \sum_{\sigma \in S_k}(\mathrm{sgn}\,\sigma)\,q^{i_{\sigma(1)}}_{l_1}\cdots q^{i_{\sigma(k)}}_{l_k}=\sum_{I \nearrow}a_I Q(L,I)$$
where $Q(L,I)$ denotes the minor of $Q$ with rows $L$ and columns $I$. By the same argument we have $b_L^\prime=\sum_{I \nearrow}b_I Q(L,I)$. So finally, what we have to prove is (if I didn't make a calculation mistake) that, given an orthogonal matrix $Q$ and real numbers $a_I,b_I$,
$$\sum_{I \nearrow} a_I b_I=\sum_{I \nearrow}\left(\sum_{J \nearrow}a_J Q(I,J)\right)\left(\sum_{J \nearrow}b_J Q(I,J)\right)$$
which seems like a tricky linear algebra problem. Do you have any ideas about this, or could you suggest another approach? Thanks in advance.
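As a sanity check (not a proof), the identity can be tested numerically. The following NumPy sketch draws a random orthogonal $Q$ and random coefficients $a_I, b_I$ and compares both sides; the helper `minor` and all variable names are my own, not from Lee's book.

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 2

# Random orthogonal matrix Q via QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

# All increasing multi-indices I of length k.
idx = list(itertools.combinations(range(n), k))

def minor(M, I, J):
    """Determinant of the submatrix of M with rows I and columns J."""
    return np.linalg.det(M[np.ix_(I, J)])

a = rng.standard_normal(len(idx))  # coefficients a_I
b = rng.standard_normal(len(idx))  # coefficients b_I

lhs = np.dot(a, b)  # sum_I a_I b_I
rhs = sum(
    sum(a[j] * minor(Q, I, J) for j, J in enumerate(idx))
    * sum(b[j] * minor(Q, I, J) for j, J in enumerate(idx))
    for I in idx
)
print(np.isclose(lhs, rhs))  # True
```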

Best Answer

$ \newcommand\Ext{{\textstyle\bigwedge}} \newcommand\trans[1]{#1^{\mathrm T}} \newcommand\form[1]{\langle#1\rangle} \newcommand\K{\mathbb K} \newcommand\Tensor{{\textstyle\bigotimes}} \newcommand\tensor\otimes \newcommand\lintr{\mathbin\lrcorner} \newcommand\End{\mathrm{End}} \newcommand\EndOp{\End_{\mathrm{op}}} $

I'm going to start by demonstrating how my comment gives the desired result, then prove the key formula, and finally give an abstract approach to constructing the inner product on $\Ext V$ that doesn't involve any coordinates or bases.

Generalized Cauchy-Binet Formula

The key result in your notation is $$ (AB)(I, K) = \sum_{J\nearrow} A(I, J)B(J, K), $$ from which it follows that $$ \sum_{I\nearrow} Q(I, J)Q(I, K) = \sum_{J\nearrow} \trans Q(J, I)Q(I, K) = (\trans QQ)(J, K) = \delta_{J,K}, $$ so the RHS of your last equation becomes $$\begin{aligned} &\sum_{I\nearrow}\left(\sum_{J\nearrow}a_JQ(I, J)\right) \left(\sum_{J\nearrow}b_JQ(I, J)\right) \\ &\qquad= \sum_{J\nearrow}\sum_{K\nearrow}a_Jb_K\sum_{I\nearrow}Q(I, J)Q(I, K) \\ &\qquad= \sum_{J\nearrow}\sum_{K\nearrow}a_Jb_K\delta_{J,K} \\ &\qquad= \sum_{J\nearrow}a_Jb_J. \end{aligned}$$
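Both the key formula and the resulting orthogonality of the matrix of $k\times k$ minors (the $k$-th compound matrix) can be spot-checked numerically. A minimal NumPy sketch, with helper names of my own choosing:

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 3

idx = list(itertools.combinations(range(n), k))

def minor(M, I, J):
    """Determinant of the submatrix of M with rows I and columns J."""
    return np.linalg.det(M[np.ix_(I, J)])

A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Key formula: (AB)(I, K) = sum over increasing J of A(I, J) B(J, K).
for I in idx:
    for K in idx:
        lhs = minor(A @ B, I, K)
        rhs = sum(minor(A, I, J) * minor(B, J, K) for J in idx)
        assert np.isclose(lhs, rhs)

# Consequence: for orthogonal Q, the matrix of k-by-k minors is orthogonal.
Q, _ = np.linalg.qr(A)
C = np.array([[minor(Q, I, J) for J in idx] for I in idx])
print(np.allclose(C.T @ C, np.eye(len(idx))))  # True
```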

Proof

From here on, we will only talk about the exterior algebra over an $n$-dimensional vector space $V$ with field of scalars $\K$ equipped with a symmetric bilinear form $\form{\cdot,\cdot} : V\times V \to \K$; the Riemannian structure plays no part in this beyond giving us this form on the tangent space, i.e. the metric.

The exterior algebra $\Ext V$ can be defined as the unique-up-to-isomorphism associative algebra with inclusion $V \to \Ext V$ that has the following universal property: for any associative algebra $A$ and linear $f : V \to A$ such that $f(v)^2 = 0$ for all $v \in V$, there is a unique algebra homomorphism $g : \Ext V \to A$ such that $g(v) = f(v)$ for $v \in V$. In other words, $f$ uniquely lifts to a homomorphism on $\Ext V$.

Given any linear $f : V \to V$, we can widen the codomain and consider it as a map $V \to \Ext V$, and trivially $f(v)\wedge f(v) = 0$. So there is a unique extension of $f$ to a homomorphism $\Ext V \to \Ext V$, the outermorphism $f_\wedge$. Since they take vectors to vectors, outermorphisms are grade-preserving. The space $\Ext^{\!n} V$ is one-dimensional, so there is a unique scalar, the determinant $\det(f)$, such that $$ f_\wedge(I) = (\det f)I,\quad I \in \Ext^{\!n} V. $$ Every simple multivector $J$ uniquely determines a subspace $[J] \subseteq V$ by $$ v \in [J] \iff v \wedge J = 0. $$ Given simple $m$-vectors $J, K$, there are orthogonal projections $P_J, P_K$ onto $[J], [K]$; the $(J,K)$-minor $\det_{J,K}(f)$ of $f$ is the unique scalar such that $$ (P_J\circ f\circ P_K)_\wedge(K) = (\det_{J,K}f)J. $$

You should convince yourself that the minors of the matrix of $f$ in an orthonormal basis $\{e_i\}$ are exactly the $(J,K)$-minors where $$ J = e_1\wedge\cdots\wedge e_{j-1}\wedge e_{j+1}\wedge\cdots\wedge e_n,\quad K = e_1\wedge\cdots\wedge e_{k-1}\wedge e_{k+1}\wedge\cdots\wedge e_n $$ for each $j, k$. In fact, if $J = e_{j_1}\wedge\cdots\wedge e_{j_m}$ and $K = e_{k_1}\wedge\cdots\wedge e_{k_m}$, then $\det_{J,K}(f)$ is the determinant of the matrix with entries $(f_{j_a,k_b})_{a,b=1}^m$.
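This correspondence can be tested numerically without assuming it: the sketch below builds the wedge product combinatorially (coefficients on increasing index tuples, with signs from sorting), applies the outermorphism of $P_J\circ f\circ P_K$ to $e_K$, and compares the coefficient along $e_J$ with the submatrix determinant. All helper names are mine.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

def wedge_vec(mv, v):
    """Wedge a multivector (dict: increasing index tuple -> coeff) with a vector v."""
    out = {}
    for I, c in mv.items():
        for i, vi in enumerate(v):
            if i in I:
                continue  # e_i ^ e_i = 0
            p = sum(1 for a in I if a < i)  # position of i when inserted into I
            sign = (-1) ** (len(I) - p)     # transpositions needed to sort
            J = tuple(sorted(I + (i,)))
            out[J] = out.get(J, 0.0) + sign * c * vi
    return out

def outer_on_blade(M, K):
    """Apply the outermorphism of the matrix M to e_K = e_{k_1} ^ ... ^ e_{k_m}."""
    mv = {(): 1.0}
    for col in K:
        mv = wedge_vec(mv, M[:, col])
    return mv

f = rng.standard_normal((n, n))
J, K = (0, 2, 4), (1, 2, 3)  # increasing index sets of length m = 3

# Orthogonal projections onto span{e_j : j in J} and span{e_k : k in K}.
P_J = np.diag([1.0 if i in J else 0.0 for i in range(n)])
P_K = np.diag([1.0 if i in K else 0.0 for i in range(n)])

# (P_J . f . P_K)_wedge(e_K) should equal det_{J,K}(f) e_J.
blade = outer_on_blade(P_J @ f @ P_K, K)
det_JK = blade.get(J, 0.0)  # coefficient along e_J
print(np.isclose(det_JK, np.linalg.det(f[np.ix_(J, K)])))  # True
```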

It is now straightforward to prove the generalized Cauchy-Binet formula. Let $J, L$ be simple $m$-vectors, and suppose $U = [K_1]\oplus\cdots\oplus[K_k] \subseteq V$ is a subspace with $K_1,\dotsc, K_k$ simple $m$-vectors and $[K_1], \dotsc, [K_k]$ mutually orthogonal. Then for any $g : V \to U$ and $f : U \to V$ $$\begin{aligned} \Bigl[(P_J)_\wedge\circ f_\wedge\circ g_\wedge\circ(P_L)_\wedge\Bigr](L) &= [(P_J)_\wedge\circ f_\wedge]\left(\sum_{i=1}^k K_i\det_{K_i,L}g\right) \\ &= \sum_{i=1}^k [(P_J)_\wedge\circ f_\wedge\circ (P_{K_i})_\wedge](K_i)\det_{K_i,L}g \\ &= J\sum_{i=1}^k(\det_{J,K_i}f)(\det_{K_i,L}g), \end{aligned}$$ where we've used the facts that the outermorphism of a composition is the composition of the outermorphisms, and that $K_i = (P_{K_i})_\wedge(K_i)$. The first equality follows since $(P_U)_\wedge = \sum_{i=1}^k(P_{K_i})_\wedge$ (which is specifically a consequence of the $[K_i]$ being mutually orthogonal). Hence $$ \det_{J,L}(f\circ g) = \sum_{i=1}^k(\det_{J,K_i}f)(\det_{K_i,L}g). $$

The Natural Pairing $\Ext V^*\times\Ext V$

Let $\Tensor V$ denote the tensor algebra of $V$. It has the universal property that every linear map $V \to A$ with $A$ an associative algebra extends to a homomorphism $\Tensor V \to A$. We will not distinguish the original map from the extended map.

Let $\omega : V \to V$ be a linear involution; considered as map into $\Tensor V$, it extends to an algebra involution on $\Tensor V$. Every linear $f : V \to \Tensor V$ then defines a unique $\omega$-derivation $\Omega_f : \Tensor V \to \Tensor V$ by $$ \Omega_f(1) = 0,\quad \Omega_f(v) = f(v),\quad \Omega_f(v\tensor X) = f(v)\tensor X + \omega(v)\tensor \Omega_f(X) $$ for $v \in V$ and $X \in \Tensor^kV$. It can be confirmed that this gives a well defined map, and that it is indeed a derivation: $$ \Omega_f(X\tensor Y) = \Omega_f(X)\tensor Y + \omega(X)\tensor\Omega_f(Y). $$

The exterior algebra can be realized as a quotient $\Ext V = \Tensor V/I$ where $I$ is the two-sided ideal generated by $\{v\tensor v \;:\; v \in V\}$. As such, every map $f : V \to \Ext V$ lifts to a (non-unique) map $f' : V \to \Tensor V$, which gives a unique derivation $\Omega_{f'}$. It can be confirmed that $\Omega_{f'}$ maps $I$ to $I$ if $$ \Omega_{f'}(v\tensor v) = f'(v)\tensor v + \omega(v)\tensor f'(v) \in I $$ for all $v \in V$, and this condition descends to $\Ext V$ as $$ f(v)\wedge v + \omega(v)\wedge f(v) = 0. $$ Under this condition, $f$ extends to a unique $\omega$-derivation on $\Ext V$.

Now, for any $\alpha \in V^*$ define $\alpha\lintr v = \alpha(v)$. We choose $\omega(v) = -v$ and confirm $$ (\alpha\lintr v)\wedge v + (-v)\wedge(\alpha\lintr v) = (\alpha\lintr v)v - (\alpha\lintr v)v = 0. $$ So $\alpha\lintr$ extends to an antiderivation; note that it takes grade $k$ to grade $k-1$. So now we have a map $$ \alpha \mapsto \alpha\lintr : V^* \to \End(\Ext V) $$ into the linear endomorphisms of $\Ext V$. Here we have a choice; we can take this as is, or map it into the opposite algebra $\EndOp(\Ext V)$ (which is $\End(\Ext V)$ with its multiplication reversed). It turns out the inner product you want corresponds to the $\EndOp(\Ext V)$ choice. We can confirm that $$ [(\alpha\lintr)\circ(\alpha\lintr)](X) = \alpha\lintr(\alpha\lintr X) = 0 $$ for all $X$, and so $\alpha \mapsto \alpha\lintr$ extends to a homomorphism $\Ext V^* \to \EndOp(\Ext V)$, i.e. an antihomomorphism $\Ext V^* \to \End(\Ext V)$ which acts like a homomorphism but reverses products. This property implies $$ \omega\lintr X \in \Ext^{\!l-k}V \quad\text{for}\quad \omega \in \Ext^{\!k} V^*,\quad X \in \Ext^{\!l} V. $$

We finally define the natural pairing $\Ext V^*\times\Ext V \to \K$ by $$ \form{\omega, X} = \form{\omega\lintr X}_0 $$ where $\form\cdot_0$ selects the scalar component. For $\omega^1,\dotsc,\omega^k \in V^*$ and $v_1,\dotsc,v_k \in V$, we can use the fact that $\omega^i\lintr$ is an antiderivation to show that $$\begin{aligned} \form{\omega^1\wedge\cdots\wedge\omega^k,\: v_1\wedge\cdots\wedge v_k} &= (\omega^1\wedge\cdots\wedge\omega^k)\lintr(v_1\wedge\cdots\wedge v_k) \\ &= \omega^k\lintr\omega^{k-1}\lintr\cdots\lintr\omega^1\lintr(v_1\wedge\cdots\wedge v_k) \\ &= \det\bigl(\omega^i(v_j)\bigr)_{i,j=1}^k, \end{aligned}$$ where we've used the convention that $\lintr$ is right-associative. The musical isomorphisms $\sharp$ and $\flat$ extend to an isomorphism between $\Ext V^*$ and $\Ext V$, and allow us to transfer this pairing to those spaces: $$ \form{X, Y} = \form{X^\flat\lintr Y}_0,\quad \form{\omega, \eta} = \form{\omega\lintr\eta^\sharp}_0. $$
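The determinant formula for the pairing can be checked in coordinates with a small model of $\Ext V$: blades are stored as coefficients on increasing index tuples, $\wedge$ and $\lintr$ are implemented combinatorially, and the scalar component of $\omega^k\lintr\cdots\lintr\omega^1\lintr(v_1\wedge\cdots\wedge v_k)$ is compared against $\det(\omega^i(v_j))$. This is only a sketch of the construction above, and all names are mine.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 5, 3

def wedge_vec(mv, v):
    """Wedge a multivector (dict: increasing index tuple -> coeff) with a vector v."""
    out = {}
    for I, c in mv.items():
        for i, vi in enumerate(v):
            if i in I:
                continue  # e_i ^ e_i = 0
            sign = (-1) ** (len(I) - sum(1 for a in I if a < i))
            J = tuple(sorted(I + (i,)))
            out[J] = out.get(J, 0.0) + sign * c * vi
    return out

def contract(alpha, mv):
    """Interior product alpha _| mv: an antiderivation lowering grade by one."""
    out = {}
    for I, c in mv.items():
        for a, i in enumerate(I):
            J = I[:a] + I[a + 1:]
            out[J] = out.get(J, 0.0) + (-1) ** a * c * alpha[i]
    return out

omegas = [rng.standard_normal(n) for _ in range(k)]  # components of omega^i
vs = [rng.standard_normal(n) for _ in range(k)]      # vectors v_j

blade = {(): 1.0}
for v in vs:
    blade = wedge_vec(blade, v)   # v_1 ^ ... ^ v_k

X = blade
for w in omegas:                  # apply omega^1 first, omega^k last
    X = contract(w, X)
pairing = X.get((), 0.0)          # scalar component

gram = np.array([[w @ v for v in vs] for w in omegas])  # matrix omega^i(v_j)
print(np.isclose(pairing, np.linalg.det(gram)))  # True
```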
