Why Use Geometric Algebra Instead of Differential Forms?

differential-forms, differential-geometry, geometric-algebras, soft-question, tensors

This is somewhat similar to "Are Clifford algebras and differential forms equivalent frameworks for differential geometry?", but I want to restrict discussion to $\mathbb{R}^n$, not arbitrary manifolds.

Moreover, I am interested specifically in whether

$$(\text{differential forms on }\mathbb{R}^n\text{ + a notion of inner product defined on them}) \simeq \text{geometric algebra over }\mathbb{R}^n$$

where the isomorphism is as Clifford algebras. (I.e., is geometric algebra just the description of the algebraic properties of differential forms when endowed with a suitable notion of inner product?)

1. Is any geometric algebra over $\mathbb{R}^n$ isomorphic to the exterior algebra over $\mathbb{R}^n$, at least as a vector space?

(Obviously they are not isomorphic as Clifford algebras unless our quadratic form is the zero quadratic form.)

Since the basis of the geometric algebra (as a vector space) is the same as (or at least in bijection with) the basis of the exterior algebra over $\mathbb{R}^n$, the answer seems to be yes. It also seems to be yes because the standard embedding of any geometric algebra over $\mathbb{R}^n$ into the tensor algebra over $\mathbb{R}^n$ always "piggybacks" on the embedding of the exterior algebra over $\mathbb{R}^n$; see this MathOverflow question.
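To spell out the sense of isomorphism I have in mind: for $n=2$, writing $\mathcal G(\mathbb{R}^2)$ for the geometric algebra, both algebras are $2^2=4$-dimensional, with bases

$$\Lambda(\mathbb{R}^2):\ \{1,\ e_1,\ e_2,\ e_1\wedge e_2\},\qquad \mathcal G(\mathbb{R}^2):\ \{1,\ e_1,\ e_2,\ e_1e_2\},$$

and the map $e_{i_1}\wedge\cdots\wedge e_{i_k}\mapsto e_{i_1}\cdots e_{i_k}$ (on an orthonormal basis, with increasing indices) extends linearly to a vector-space isomorphism, even though it is not an algebra isomorphism unless the quadratic form is zero.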

2. Are differential forms the standard construction of an object satisfying the algebraic properties of the exterior algebra over $\mathbb{R}^n$?

3. If the answers to 1. and 2. are both yes, does that imply that the displayed equivalence above (the "part in yellow") is true?

EDIT: It seems like the only problem might be that differential forms are covariant tensors, whereas I imagine that multivectors are generally assumed to be contravariant. However, distinguishing between co- and contravariant tensors is a standard issue in tensor analysis, so this doesn't really seem like an important issue to me.

Assuming that I am reading this correctly, it seems that the elementary construction of the geometric algebra with respect to the standard inner product on $\mathbb{R}^n$ given by Alan MacDonald here is exactly the exterior algebra over $\mathbb{R}^n$ equipped with an inner product.

David Hestenes seems to try to explain some of this here and here, although I don't quite understand what he is getting at.

(Also his claim in the first document that matrix algebra is subsumed by geometric algebra seems completely false, since he only addresses those aspects which relate to alternating tensors.)

Best Answer

I just want to point out that GA can be used to make covariant multivectors (or differential forms) on $\mathbb R^n$ without forcing a metric onto it. In other words, the distinction between vectors and covectors (or between $\mathbb R^n$ and its dual) can be maintained.


This is done with a pseudo-Euclidean space $\mathbb R^{n,n}$.

Take an orthonormal set of spacelike vectors $\{\sigma_i\}$ (which square to ${^+}1$) and timelike vectors $\{\tau_i\}$ (which square to ${^-}1$). Define null vectors

$$\Big\{\nu_i=\frac{\sigma_i+\tau_i}{\sqrt2}\Big\}$$

$$\Big\{\mu_i=\frac{\sigma_i-\tau_i}{\sqrt2}\Big\};$$

they're null because

$${\nu_i}^2=\frac{{\sigma_i}^2+2\sigma_i\cdot\tau_i+{\tau_i}^2}{2}=\frac{(1)+2(0)+({^-}1)}{2}=0$$

$${\mu_i}^2=\frac{{\sigma_i}^2-2\sigma_i\cdot\tau_i+{\tau_i}^2}{2}=\frac{(1)-2(0)+({^-}1)}{2}=0.$$

More generally,

$$\nu_i\cdot\nu_j=\frac{\sigma_i\cdot\sigma_j+\sigma_i\cdot\tau_j+\tau_i\cdot\sigma_j+\tau_i\cdot\tau_j}{2}=\frac{(\delta_{i,j})+0+0+({^-}\delta_{i,j})}{2}=0$$

and

$$\mu_i\cdot\mu_j=0.$$

So the spaces spanned by $\{\nu_i\}$ or $\{\mu_i\}$ each have degenerate quadratic forms. But the dot product between them is non-degenerate:

$$\nu_i\cdot\mu_i=\frac{\sigma_i\cdot\sigma_i-\sigma_i\cdot\tau_i+\tau_i\cdot\sigma_i-\tau_i\cdot\tau_i}{2}=\frac{(1)-0+0-({^-}1)}{2}=1$$

$$\nu_i\cdot\mu_j=\frac{\sigma_i\cdot\sigma_j-\sigma_i\cdot\tau_j+\tau_i\cdot\sigma_j-\tau_i\cdot\tau_j}{2}=\frac{(\delta_{i,j})-0+0-({^-}\delta_{i,j})}{2}=\delta_{i,j}$$

Of course, we could have just started with the definition that $\mu_i\cdot\nu_j=\delta_{i,j}=\nu_i\cdot\mu_j$, and $\nu_i\cdot\nu_j=0=\mu_i\cdot\mu_j$, instead of going through "spacetime".
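These relations are easy to check numerically. Here is a minimal Python/NumPy sketch (the dimension $n=3$ and all names are just illustrative choices, not anything fixed by the construction):

```python
import numpy as np

n = 3
G = np.diag([1.0] * n + [-1.0] * n)     # metric of R^{n,n}: sigma_i^2 = +1, tau_i^2 = -1

def dot(a, b):
    """Dot product of coordinate vectors with respect to the metric G."""
    return a @ G @ b

E = np.eye(2 * n)
sigma = [E[i] for i in range(n)]        # spacelike orthonormal basis vectors
tau   = [E[n + i] for i in range(n)]    # timelike orthonormal basis vectors

# Null bases: nu_i = (sigma_i + tau_i)/sqrt2, mu_i = (sigma_i - tau_i)/sqrt2.
nu = [(sigma[i] + tau[i]) / np.sqrt(2) for i in range(n)]
mu = [(sigma[i] - tau[i]) / np.sqrt(2) for i in range(n)]

for i in range(n):
    for j in range(n):
        assert np.isclose(dot(nu[i], nu[j]), 0.0)            # nu_i . nu_j = 0
        assert np.isclose(dot(mu[i], mu[j]), 0.0)            # mu_i . mu_j = 0
        assert np.isclose(dot(mu[i], nu[j]), float(i == j))  # mu_i . nu_j = delta_ij
print("null-basis relations verified")
```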


The space $V$ will be generated by $\{\nu_i\}$, and its dual $V^*$ by $\{\mu_i=\nu^i\}$. You can take the dot product of something in $V^*$ with something in $V$; this is exactly how a differential 1-form (an element of $V^*$) pairs with a vector. You can make contravariant multivectors from wedge products of things in $V$, and covariant multivectors from wedge products of things in $V^*$.
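For example, with $\omega=a\,\mu_1+b\,\mu_2\in V^*$ and $v=x\,\nu_1+y\,\nu_2\in V$, the relations above give

$$\omega\cdot v=(a\,\mu_1+b\,\mu_2)\cdot(x\,\nu_1+y\,\nu_2)=ax+by,$$

which is exactly the scalar pairing $\omega(v)$ of a 1-form with a vector.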

You can also take the wedge product of something in $V^*$ with something in $V$.

$$\mu_i\wedge\nu_i=\frac{\sigma_i\wedge\sigma_i+\sigma_i\wedge\tau_i-\tau_i\wedge\sigma_i-\tau_i\wedge\tau_i}{2}=\frac{0+\sigma_i\tau_i-\tau_i\sigma_i-0}{2}=\sigma_i\wedge\tau_i$$

$$\mu_i\wedge\nu_j=\frac{\sigma_i\sigma_j+\sigma_i\tau_j-\tau_i\sigma_j-\tau_i\tau_j}{2},\quad i\neq j$$

What does this mean? ...I suppose it could be a matrix (a mixed variance tensor)!


A matrix can be defined as a bivector:

$$M = \sum_{i,j} M^i\!_j\;\nu_i\wedge\mu_j = \sum_{i,j} M^i\!_j\;\nu_i\wedge\nu^j$$

where each $M^i_j$ is a scalar. Note that $(\nu_i\wedge\mu_j)\neq{^-}(\nu_j\wedge\mu_i)$, so $M$ is not necessarily antisymmetric. The corresponding linear function $f:V\to V$ is (with $\cdot$ the "fat dot product")

$$f(x) = M\cdot x = \frac{Mx-xM}{2}$$

$$= \sum_{i,j} M^i_j(\nu_i\wedge\mu_j)\cdot\sum_k x^k\nu_k$$

$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i\mu_j-\mu_j\nu_i}{2}\cdot\nu_k$$

$$= \sum_{i,j,k} M^i_jx^k\frac{(\nu_i\mu_j)\nu_k-\nu_k(\nu_i\mu_j)-(\mu_j\nu_i)\nu_k+\nu_k(\mu_j\nu_i)}{4}$$

(the $\nu$'s anticommute because their dot product is zero:)

$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i\mu_j\nu_k+\nu_i\nu_k\mu_j+\mu_j\nu_k\nu_i+\nu_k\mu_j\nu_i}{4}$$

$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i(\mu_j\nu_k+\nu_k\mu_j)+(\mu_j\nu_k+\nu_k\mu_j)\nu_i}{4}$$

$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i(\mu_j\cdot\nu_k)+(\mu_j\cdot\nu_k)\nu_i}{2}$$

$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i(\delta_{j,k})+(\delta_{j,k})\nu_i}{2}$$

$$= \sum_{i,j,k} M^i_jx^k\big(\delta_{j,k}\nu_i\big)$$

$$= \sum_{i,j} M^i_jx^j\nu_i$$

This agrees with the conventional definition of matrix multiplication.
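If you want to see this numerically, here is a minimal throwaway Python sketch (my own blade-bitmap implementation of the geometric product on $\mathbb R^{n,n}$, not any particular library; the dimension, matrix entries, and function names are all just illustrative):

```python
import numpy as np
from itertools import product

n = 2                                   # work in R^{n,n} with n = 2
sig = [1.0] * n + [-1.0] * n            # sigma_i^2 = +1, tau_i^2 = -1

# A multivector is a dict {bitmask: coefficient}; bit i stands for the i-th
# orthonormal basis vector (bits 0..n-1 are sigma_i, bits n..2n-1 are tau_i).

def reorder_sign(a, b):
    """Sign from the swaps needed to bring the product of blades a and b into canonical order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1.0 if swaps & 1 else 1.0

def gp(x, y):
    """Geometric product of two multivectors."""
    out = {}
    for (a, ca), (b, cb) in product(x.items(), y.items()):
        c = reorder_sign(a, b) * ca * cb
        for i in range(2 * n):          # basis vectors common to a and b square to sig[i]
            if (a & b) >> i & 1:
                c *= sig[i]
        out[a ^ b] = out.get(a ^ b, 0.0) + c
    return out

def add(x, y, scale=1.0):
    """x + scale*y for multivectors."""
    out = dict(x)
    for k, v in y.items():
        out[k] = out.get(k, 0.0) + scale * v
    return out

def fat_dot(x, y):
    """The commutator product (xy - yx)/2 used above."""
    return {k: 0.5 * v for k, v in add(gp(x, y), gp(y, x), -1.0).items()}

# Null bases: nu_i = (sigma_i + tau_i)/sqrt2, mu_i = (sigma_i - tau_i)/sqrt2.
r = 1 / np.sqrt(2)
nu = [{1 << i: r, 1 << (n + i): r} for i in range(n)]
mu = [{1 << i: r, 1 << (n + i): -r} for i in range(n)]

# Encode a matrix as the bivector M = sum_ij M^i_j nu_i ^ mu_j.
# (For two vectors the wedge equals the commutator product used here.)
Mat = np.array([[2.0, 3.0],
                [5.0, 7.0]])
M = {}
for i, j in product(range(n), range(n)):
    M = add(M, fat_dot(nu[i], mu[j]), Mat[i, j])

# Apply f(x) = M . x to x = sum_k x^k nu_k.
xvec = np.array([11.0, 13.0])
x = {}
for k in range(n):
    x = add(x, nu[k], xvec[k])
fx = fat_dot(M, x)

# fx should be sum_i (Mat @ xvec)^i nu_i; read off the nu-coordinates from the
# sigma components (the coefficient of sigma_i in nu_i is 1/sqrt2).
coords = [np.sqrt(2) * fx.get(1 << i, 0.0) for i in range(n)]
print(np.allclose(coords, Mat @ xvec))  # True: agrees with ordinary matrix multiplication
```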

In fact, it even works for non-square matrices; the above calculations work the same if the $\nu_i$'s on the left in $M$ are basis vectors for a different space. A bonus is that it also works for a non-degenerate quadratic form; the calculations don't rely on ${\mu_i}^2=0$, nor ${\nu_i}^2=0$, but only on $\nu_i$ being orthogonal to $\nu_k$, and $\mu_j$ being reciprocal to $\nu_k$. So you could instead have $\mu_j$ (the right factors in $M$) be in the same space as $\nu_k$ (the generators of $x$), and $\nu_i$ (the left factors in $M$) in a different space. A downside is that it won't map a non-degenerate space to itself.

I admit that this is worse than the standard matrix algebra; the dot product is neither invertible nor associative. Still, it's good to have this connection between the different algebras. And it's interesting to think of a matrix as a bivector that "rotates" a vector through the dual space and back to a different point in the original space (or a new space).


Speaking of matrix transformations, I should discuss the underlying principle for "contra-/covariance": that the basis vectors may vary.

We want to be able to take any (invertible) linear transformation of the null space $V$ and expect that the opposite transformation applies to $V^*$. Arbitrary linear transformations of the ambient $\mathbb R^{n,n}$ will not preserve $V$; the transformed $\nu_i$ may not be null. It suffices to consider transformations that preserve the dot product on $\mathbb R^{n,n}$. One obvious type is the hyperbolic rotation

$$\sigma_1\mapsto\sigma_1\cosh\phi+\tau_1\sinh\phi={\sigma_1}'$$

$$\tau_1\mapsto\sigma_1\sinh\phi+\tau_1\cosh\phi={\tau_1}'$$

$$\sigma_2={\sigma_2}',\quad\sigma_3={\sigma_3}',\quad\cdots$$

$$\tau_2={\tau_2}',\quad\tau_3={\tau_3}',\quad\cdots$$

(or, more compactly, $x\mapsto\exp(-\sigma_1\tau_1\phi/2)x\exp(\sigma_1\tau_1\phi/2)$ ).
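To check that this exponential form agrees with the explicit map above: $(\sigma_1\tau_1)^2=-{\sigma_1}^2{\tau_1}^2=1$, so $\exp(\sigma_1\tau_1\phi/2)=\cosh(\phi/2)+\sigma_1\tau_1\sinh(\phi/2)$; and since $\sigma_1$ anticommutes with $\sigma_1\tau_1$,

$$\exp(-\sigma_1\tau_1\phi/2)\,\sigma_1\exp(\sigma_1\tau_1\phi/2)=\sigma_1\exp(\sigma_1\tau_1\phi)=\sigma_1\cosh\phi+\tau_1\sinh\phi,$$

and similarly for $\tau_1$, while $\sigma_2,\tau_2,\dots$ commute with $\sigma_1\tau_1$ and are left fixed.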

The induced transformation of the null vectors is

$${\nu_1}'=\frac{{\sigma_1}'+{\tau_1}'}{\sqrt2}=(\cosh\phi+\sinh\phi)\,\frac{\sigma_1+\tau_1}{\sqrt2}=\exp(\phi)\,\nu_1$$

$${\mu_1}'=\frac{{\sigma_1}'-{\tau_1}'}{\sqrt2}=(\cosh\phi-\sinh\phi)\,\frac{\sigma_1-\tau_1}{\sqrt2}=\exp(-\phi)\,\mu_1$$

$${\nu_2}'=\nu_2,\quad{\nu_3}'=\nu_3,\quad\cdots$$

$${\mu_2}'=\mu_2,\quad{\mu_3}'=\mu_3,\quad\cdots$$

The vector $\nu_1$ is multiplied by some positive number $e^\phi$, and the covector $\mu_1$ is divided by the same number. The dot product is still ${\mu_1}'\cdot{\nu_1}'=1$.

You can get a negative multiplier for $\nu_1$ simply by the inversion $\sigma_1\mapsto{^-}\sigma_1,\quad\tau_1\mapsto{^-}\tau_1$; this will also negate $\mu_1$. The result is that you can multiply $\nu_1$ by any non-zero real number, and $\mu_1$ will be divided by the same number.

Of course, this only varies one basis vector in one direction. You could try to rotate the vectors, but a simple rotation in a $\sigma_i\sigma_j$ plane will mix $V$ and $V^*$ together. This problem is solved by an isoclinic rotation in $\sigma_i\sigma_j$ and $\tau_i\tau_j$, which causes the same rotation in $\nu_i\nu_j$ and $\mu_i\mu_j$ (while keeping them separate).

Combine these stretches, reflections, and rotations, and you can generate any invertible linear transformation on $V$, all while maintaining the degeneracy ${\nu_i}^2=0$ and the duality $\mu_i\cdot\nu_j=\delta_{i,j}$. This shows that $V$ and $V^*$ do have the correct "variance".
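Here is a small numerical illustration of the stretch case (again a Python/NumPy sketch with illustrative names; it only checks the coordinate form of the hyperbolic rotation, not the rotor form):

```python
import numpy as np

n, phi = 3, 0.7
G = np.diag([1.0] * n + [-1.0] * n)           # metric of R^{n,n}

def dot(a, b):
    return a @ G @ b

E = np.eye(2 * n)
nu = [(E[i] + E[n + i]) / np.sqrt(2) for i in range(n)]
mu = [(E[i] - E[n + i]) / np.sqrt(2) for i in range(n)]

# Hyperbolic rotation in the sigma_1 tau_1 plane (coordinate indices 0 and n).
boost = np.eye(2 * n)
boost[0, 0] = boost[n, n] = np.cosh(phi)
boost[0, n] = boost[n, 0] = np.sinh(phi)

nu_p = [boost @ v for v in nu]
mu_p = [boost @ v for v in mu]

assert np.allclose(nu_p[0], np.exp(phi) * nu[0])              # nu_1' = e^phi    nu_1
assert np.allclose(mu_p[0], np.exp(-phi) * mu[0])             # mu_1' = e^{-phi} mu_1
assert all(np.isclose(dot(mu_p[i], nu_p[j]), float(i == j))   # duality preserved
           for i in range(n) for j in range(n))
assert all(np.isclose(dot(nu_p[i], nu_p[j]), 0.0)             # null vectors stay null
           for i in range(n) for j in range(n))
print("boost checks passed")
```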


See also Hestenes' Tutorial, page 5 ("Quadratic forms vs contractions").

