[Math] What’s the difference between geometric, exterior and multilinear algebra

I've studied what I think is geometric algebra, but can't seem to understand the difference between it and exterior and multilinear algebra. And is it linked to Clifford and Grassmann algebras in any way?
abstract-algebra, clifford-algebras, exterior-algebra, geometric-algebras, multilinear-algebra
Related Solutions
I just want to point out that GA can be used to make covariant multivectors (or differential forms) on $\mathbb R^n$ without forcing a metric onto it. In other words, the distinction between vectors and covectors (or between $\mathbb R^n$ and its dual) can be maintained.
This is done with a pseudo-Euclidean space $\mathbb R^{n,n}$.
Take an orthonormal set of spacelike vectors $\{\sigma_i\}$ (which square to ${^+}1$) and timelike vectors $\{\tau_i\}$ (which square to ${^-}1$). Define null vectors
$$\Big\{\nu_i=\frac{\sigma_i+\tau_i}{\sqrt2}\Big\}$$
$$\Big\{\mu_i=\frac{\sigma_i-\tau_i}{\sqrt2}\Big\};$$
they're null because
$${\nu_i}^2=\frac{{\sigma_i}^2+2\sigma_i\cdot\tau_i+{\tau_i}^2}{2}=\frac{(1)+2(0)+({^-}1)}{2}=0$$
$${\mu_i}^2=\frac{{\sigma_i}^2-2\sigma_i\cdot\tau_i+{\tau_i}^2}{2}=\frac{(1)-2(0)+({^-}1)}{2}=0.$$
More generally,
$$\nu_i\cdot\nu_j=\frac{\sigma_i\cdot\sigma_j+\sigma_i\cdot\tau_j+\tau_i\cdot\sigma_j+\tau_i\cdot\tau_j}{2}=\frac{(\delta_{i,j})+0+0+({^-}\delta_{i,j})}{2}=0$$
and
$$\mu_i\cdot\mu_j=0.$$
So the spaces spanned by $\{\nu_i\}$ or $\{\mu_i\}$ each have degenerate quadratic forms. But the dot product between them is non-degenerate:
$$\nu_i\cdot\mu_i=\frac{\sigma_i\cdot\sigma_i-\sigma_i\cdot\tau_i+\tau_i\cdot\sigma_i-\tau_i\cdot\tau_i}{2}=\frac{(1)-0+0-({^-}1)}{2}=1$$
$$\nu_i\cdot\mu_j=\frac{\sigma_i\cdot\sigma_j-\sigma_i\cdot\tau_j+\tau_i\cdot\sigma_j-\tau_i\cdot\tau_j}{2}=\frac{(\delta_{i,j})-0+0-({^-}\delta_{i,j})}{2}=\delta_{i,j}$$
Of course, we could have just started with the definition that $\mu_i\cdot\nu_j=\delta_{i,j}=\nu_i\cdot\mu_j$, and $\nu_i\cdot\nu_j=0=\mu_i\cdot\mu_j$, instead of going through "spacetime".
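These relations are easy to confirm numerically. Here is a minimal sketch (the coordinate representation and names are my own choices, not part of the answer), treating vectors as coordinate arrays in $\mathbb R^{n,n}$ with metric $\mathrm{diag}(1,\dots,1,{-1},\dots,{-1})$:

```python
import numpy as np

n = 3
g = np.diag([1.0] * n + [-1.0] * n)   # metric of R^{n,n}

def dot(u, v):
    return u @ g @ v

basis = np.eye(2 * n)
sigma = [basis[i] for i in range(n)]        # spacelike: square to +1
tau   = [basis[n + i] for i in range(n)]    # timelike: square to -1
nu = [(sigma[i] + tau[i]) / np.sqrt(2) for i in range(n)]
mu = [(sigma[i] - tau[i]) / np.sqrt(2) for i in range(n)]

for i in range(n):
    for j in range(n):
        assert abs(dot(nu[i], nu[j])) < 1e-12             # nu_i . nu_j = 0
        assert abs(dot(mu[i], mu[j])) < 1e-12             # mu_i . mu_j = 0
        assert abs(dot(mu[i], nu[j]) - (i == j)) < 1e-12  # mu_i . nu_j = delta_ij
```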
The space $V$ will be generated by $\{\nu_i\}$, and its dual $V^*$ by $\{\mu_i=\nu^i\}$. Elements of $V^*$ play the role of differential 1-forms; the dot product of something in $V^*$ with something in $V$ is the natural dual pairing, a scalar. You can make contravariant multivectors from wedge products of things in $V$, and covariant multivectors from wedge products of things in $V^*$.
You can also take the wedge product of something in $V^*$ with something in $V$.
$$\mu_i\wedge\nu_i=\frac{\sigma_i\wedge\sigma_i+\sigma_i\wedge\tau_i-\tau_i\wedge\sigma_i-\tau_i\wedge\tau_i}{2}=\frac{0+\sigma_i\tau_i-\tau_i\sigma_i-0}{2}=\sigma_i\wedge\tau_i$$
$$\mu_i\wedge\nu_j=\frac{\sigma_i\sigma_j+\sigma_i\tau_j-\tau_i\sigma_j-\tau_i\tau_j}{2},\quad i\neq j$$
What does this mean? ...I suppose it could be a matrix (a mixed variance tensor)!
A matrix can be defined as a bivector:
$$M = \sum_{i,j} M^i\!_j\;\nu_i\wedge\mu_j = \sum_{i,j} M^i\!_j\;\nu_i\wedge\nu^j$$
where each $M^i_j$ is a scalar. Note that $(\nu_i\wedge\mu_j)\neq{^-}(\nu_j\wedge\mu_i)$, so $M$ is not necessarily antisymmetric. The corresponding linear function $f:V\to V$ is (with $\cdot$ the "fat dot product")
$$f(x) = M\cdot x = \frac{Mx-xM}{2}$$
$$= \sum_{i,j} M^i_j(\nu_i\wedge\mu_j)\cdot\sum_k x^k\nu_k$$
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i\mu_j-\mu_j\nu_i}{2}\cdot\nu_k$$
$$= \sum_{i,j,k} M^i_jx^k\frac{(\nu_i\mu_j)\nu_k-\nu_k(\nu_i\mu_j)-(\mu_j\nu_i)\nu_k+\nu_k(\mu_j\nu_i)}{4}$$
(the $\nu$'s anticommute because their dot product is zero:)
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i\mu_j\nu_k+\nu_i\nu_k\mu_j+\mu_j\nu_k\nu_i+\nu_k\mu_j\nu_i}{4}$$
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i(\mu_j\nu_k+\nu_k\mu_j)+(\mu_j\nu_k+\nu_k\mu_j)\nu_i}{4}$$
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i(\mu_j\cdot\nu_k)+(\mu_j\cdot\nu_k)\nu_i}{2}$$
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i(\delta_{j,k})+(\delta_{j,k})\nu_i}{2}$$
$$= \sum_{i,j,k} M^i_jx^k\big(\delta_{j,k}\nu_i\big)$$
$$= \sum_{i,j} M^i_jx^j\nu_i$$
This agrees with the conventional definition of matrix-vector multiplication, $f(x)^i = \sum_j M^i_j x^j$.
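The whole derivation can be checked numerically. The sketch below is an ad hoc implementation (representation and names are mine, not the answer's): multivectors are dicts from sorted basis-index tuples to coefficients, the geometric product is computed blade by blade in $\mathbb R^{2,2}$, and the "fat dot" commutator $(Mx-xM)/2$ is compared against the ordinary matrix product.

```python
import math
from collections import defaultdict

n = 2
SIG = [1.0] * n + [-1.0] * n     # e_0..e_1 ~ sigma_i (+1), e_2..e_3 ~ tau_i (-1)

def blade_mul(a, b):
    """Geometric product of basis blades given as sorted index tuples."""
    lst, sign = list(a) + list(b), 1.0
    for i in range(len(lst)):                 # bubble sort; each swap flips sign
        for j in range(len(lst) - 1 - i):
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
                sign = -sign
    out, i = [], 0
    while i < len(lst):                       # contract e_k e_k = SIG[k]
        if i + 1 < len(lst) and lst[i] == lst[i + 1]:
            sign *= SIG[lst[i]]
            i += 2
        else:
            out.append(lst[i])
            i += 1
    return tuple(out), sign

def gp(A, B):                                 # geometric product of multivectors
    C = defaultdict(float)
    for ka, va in A.items():
        for kb, vb in B.items():
            k, s = blade_mul(ka, kb)
            C[k] += s * va * vb
    return dict(C)

def lin(*terms):                              # linear combination of (coef, mv) pairs
    C = defaultdict(float)
    for c, A in terms:
        for k, v in A.items():
            C[k] += c * v
    return dict(C)

s2 = math.sqrt(2)
nu = [{(i,): 1 / s2, (n + i,): 1 / s2} for i in range(n)]
mu = [{(i,): 1 / s2, (n + i,): -1 / s2} for i in range(n)]

def wedge(a, b):                              # for vectors: (ab - ba)/2
    return lin((0.5, gp(a, b)), (-0.5, gp(b, a)))

Mmat = [[2.0, 3.0], [5.0, 7.0]]               # an arbitrary matrix
xco = [1.0, -4.0]                             # arbitrary coordinates of x
M = lin(*[(Mmat[i][j], wedge(nu[i], mu[j])) for i in range(n) for j in range(n)])
x = lin(*[(xco[k], nu[k]) for k in range(n)])
fx = lin((0.5, gp(M, x)), (-0.5, gp(x, M)))   # the "fat dot" M . x

expected = lin(*[(sum(Mmat[i][j] * xco[j] for j in range(n)), nu[i])
                 for i in range(n)])
keys = set(fx) | set(expected)
assert all(abs(fx.get(k, 0.0) - expected.get(k, 0.0)) < 1e-9 for k in keys)
```

The assertion passes only because of the two facts the derivation actually uses: $\nu_i\cdot\nu_k=0$ and $\mu_j\cdot\nu_k=\delta_{j,k}$.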
In fact, it even works for non-square matrices; the above calculations work the same if the $\nu_i$'s on the left in $M$ are basis vectors for a different space. A bonus is that it also works for a non-degenerate quadratic form; the calculations don't rely on ${\mu_i}^2=0$, nor ${\nu_i}^2=0$, but only on $\nu_i$ being orthogonal to $\nu_k$, and $\mu_j$ being reciprocal to $\nu_k$. So you could instead have $\mu_j$ (the right factors in $M$) be in the same space as $\nu_k$ (the generators of $x$), and $\nu_i$ (the left factors in $M$) in a different space. A downside is that it won't map a non-degenerate space to itself.
I admit that this is worse than the standard matrix algebra; the dot product is not invertible, nor associative. Still, it's good to have this connection between the different algebras. And it's interesting to think of a matrix as a bivector that "rotates" a vector through the dual space and back to a different point in the original space (or a new space).
Speaking of matrix transformations, I should discuss the underlying principle for "contra/co variance": that the basis vectors may vary.
We want to be able to take any (invertible) linear transformation of the null space $V$, and expect that the opposite transformation applies to $V^*$. Arbitrary linear transformations of the external $\mathbb R^{n,n}$ will not preserve $V$; the transformed $\nu_i$ may not be null. It suffices to consider transformations that preserve the dot product on $\mathbb R^{n,n}$. One obvious type is the hyperbolic rotation
$$\sigma_1\mapsto\sigma_1\cosh\phi+\tau_1\sinh\phi={\sigma_1}'$$
$$\tau_1\mapsto\sigma_1\sinh\phi+\tau_1\cosh\phi={\tau_1}'$$
$$\sigma_2={\sigma_2}',\quad\sigma_3={\sigma_3}',\quad\cdots$$
$$\tau_2={\tau_2}',\quad\tau_3={\tau_3}',\quad\cdots$$
(or, more compactly, $x\mapsto\exp(-\sigma_1\tau_1\phi/2)x\exp(\sigma_1\tau_1\phi/2)$ ).
The induced transformation of the null vectors is
$${\nu_1}'=\frac{{\sigma_1}'+{\tau_1}'}{\sqrt2}=\exp(\phi)\nu_1$$
$${\mu_1}'=\frac{{\sigma_1}'-{\tau_1}'}{\sqrt2}=\exp(-\phi)\mu_1$$
$${\nu_2}'=\nu_2,\quad{\nu_3}'=\nu_3,\quad\cdots$$
$${\mu_2}'=\mu_2,\quad{\mu_3}'=\mu_3,\quad\cdots$$
The vector $\nu_1$ is multiplied by some positive number $e^\phi$, and the covector $\mu_1$ is divided by the same number. The dot product is still ${\mu_1}'\cdot{\nu_1}'=1$.
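A quick numeric check of this rescaling, with the hyperbolic rotation written as a matrix on $(\sigma_1,\tau_1)$ coordinates (the matrix form is my own restatement of the transformation above):

```python
import numpy as np

phi = 0.7
# hyperbolic rotation in the (sigma_1, tau_1) plane, acting on coordinates
boost = np.array([[np.cosh(phi), np.sinh(phi)],
                  [np.sinh(phi), np.cosh(phi)]])
nu1 = np.array([1.0, 1.0]) / np.sqrt(2)    # coordinates of (sigma_1 + tau_1)/sqrt(2)
mu1 = np.array([1.0, -1.0]) / np.sqrt(2)   # coordinates of (sigma_1 - tau_1)/sqrt(2)

assert np.allclose(boost @ nu1, np.exp(phi) * nu1)    # nu_1' = e^{+phi} nu_1
assert np.allclose(boost @ mu1, np.exp(-phi) * mu1)   # mu_1' = e^{-phi} mu_1
```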
You can get a negative multiplier for $\nu_1$ simply by the inversion $\sigma_1\mapsto{^-}\sigma_1,\quad\tau_1\mapsto{^-}\tau_1$; this will also negate $\mu_1$. The result is that you can multiply $\nu_1$ by any non-zero real number, and $\mu_1$ will be divided by the same number.
Of course, this only varies one basis vector in one direction. You could try to rotate the vectors, but a simple rotation in a $\sigma_i\sigma_j$ plane will mix $V$ and $V^*$ together. This problem is solved by an isoclinic rotation in $\sigma_i\sigma_j$ and $\tau_i\tau_j$, which causes the same rotation in $\nu_i\nu_j$ and $\mu_i\mu_j$ (while keeping them separate).
Combine these stretches, reflections, and rotations, and you can generate any invertible linear transformation on $V$, all while maintaining the degeneracy ${\nu_i}^2=0$ and the duality $\mu_i\cdot\nu_j=\delta_{i,j}$. This shows that $V$ and $V^*$ do have the correct "variance".
See also Hestenes' Tutorial, page 5 ("Quadratic forms vs contractions").
$\newcommand{\Cl}{\mathscr{Cl}(V)}$The problem with your definition of the wedge product in $\Cl$ is that it is an $n$-ary product of vectors. And so it's not clear to me how one would even discuss associativity of it (i.e. it doesn't allow one to distinguish between the product of a bivector and a vector vs the product of a vector and a bivector). So while $\Cl$ with the wedge product is an algebra -- it isn't an associative algebra like the exterior algebra is. Let me instead recommend a different definition based on grade projection (basically the same definition given in the works of Alan Macdonald):
To define the wedge product, we first define it in the case of single-grade elements. Let $A,B$ be an $r$-vector and $s$-vector, respectively. Then
$$A\wedge B := \langle AB\rangle_{r+s}$$
Now using this special case, we define the wedge product for all multivectors. Let $C,D$ be (potentially mixed grade) multivectors in $\Cl$. Then we define
$$C \wedge D := \sum_{j,k} \langle C\rangle_j \wedge \langle D\rangle_k$$
Proof that for all $A,B,C\in \Cl$, we have $A\wedge (B\wedge C) = (A\wedge B)\wedge C$:
Consider a $j$-vector $A$, $k$-vector $B$, and $l$-vector $C$. Then $$(A\wedge B)\wedge C = \langle AB\rangle_{j+k}\wedge C = \langle \langle AB\rangle_{j+k}C\rangle_{j+k+l} = \langle (AB)C\rangle_{j+k+l}$$ where the last step holds because only the $\langle AB\rangle_{j+k}$ part of $AB$ can contribute to $\langle (AB)C\rangle_{j+k+l}$.$^\dagger$
Then $$(A\wedge B)\wedge C = \langle (AB)C\rangle_{j+k+l} = \langle A(BC)\rangle_{j+k+l} = A\wedge\langle BC\rangle_{k+l} = A\wedge(B\wedge C)$$
To finish the proof by allowing $A,B,C$ to be any multivectors in $\Cl$, just note that our definition of the wedge product is bilinear.$^\ddagger$$\ \ \ \ \square$
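The grade-projection definition and the associativity just proved can be exercised numerically. Below is a minimal sketch (my own ad hoc representation, not from the answer: multivectors as dicts from sorted index tuples to coefficients) in a 3-dimensional Clifford algebra with an arbitrary non-degenerate signature:

```python
import random
from collections import defaultdict

SIG = [1.0, 1.0, -1.0]        # any non-degenerate signature works

def blade_mul(a, b):
    """Geometric product of basis blades (sorted index tuples)."""
    lst, sign = list(a) + list(b), 1.0
    for i in range(len(lst)):
        for j in range(len(lst) - 1 - i):
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
                sign = -sign
    out, i = [], 0
    while i < len(lst):
        if i + 1 < len(lst) and lst[i] == lst[i + 1]:
            sign *= SIG[lst[i]]
            i += 2
        else:
            out.append(lst[i])
            i += 1
    return tuple(out), sign

def gp(A, B):
    C = defaultdict(float)
    for ka, va in A.items():
        for kb, vb in B.items():
            k, s = blade_mul(ka, kb)
            C[k] += s * va * vb
    return dict(C)

def grade(A, r):              # grade projection <A>_r
    return {k: v for k, v in A.items() if len(k) == r}

def wedge(C, D):              # C ^ D := sum_{j,k} < <C>_j <D>_k >_{j+k}
    out = defaultdict(float)
    for j in range(4):
        for k in range(4):
            for key, v in grade(gp(grade(C, j), grade(D, k)), j + k).items():
                out[key] += v
    return dict(out)

def rand_mv():
    keys = [(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
    return {k: random.uniform(-1, 1) for k in keys}

random.seed(0)
A, B, C = rand_mv(), rand_mv(), rand_mv()
lhs, rhs = wedge(wedge(A, B), C), wedge(A, wedge(B, C))
assert all(abs(lhs.get(k, 0.0) - rhs.get(k, 0.0)) < 1e-9
           for k in set(lhs) | set(rhs))                  # associativity
v = {(0,): 0.3, (1,): -1.2, (2,): 0.5}
assert all(abs(c) < 1e-12 for c in wedge(v, v).values())  # v ^ v = 0
```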
Now we'll show that with this definition, your proposition holds. Take $\wedge_C$ to be the wedge product in $\Cl$ and $\wedge_T$ to be the wedge product in $\Lambda V$ (i.e. the direct sum of the antisymmetric tensor powers of $V$).
Let $\{e_1, \dots, e_n\}$ be an orthogonal basis for $V$ wrt the symmetric bilinear form and construct bases for $\Cl$ and $\Lambda V$ in the obvious way from it. Let $\Gamma: \Cl \to \Lambda V$ be given by $$\Gamma(A_0 + A_1e_1 + A_2e_2 + \cdots + A_{12\cdots n}e_1\wedge_C e_2\wedge_C \cdots \wedge_C e_n) = A_0 + A_1e_1 + A_2e_2 + \cdots + A_{12\cdots n}e_1\wedge_T e_2\wedge_T \cdots \wedge_T e_n$$
By definition this is linear. It's also clearly invertible. To see that it's multiplicative, it suffices (due to linearity of $\Gamma$ and bilinearity of $\wedge_C$) to show that $\Gamma$ is multiplicative on blades. Let $A=A_{i_1\cdots i_p}e_{i_1}\wedge_C\cdots\wedge_C e_{i_p}$ be a $p$-blade and $B=B_{j_1\cdots j_r}e_{j_1}\wedge_C\cdots\wedge_C e_{j_r}$ an $r$-vector. Then $$\begin{align}\Gamma(A\wedge_C B) &= \Gamma(A_{i_1\cdots i_p}e_{i_1}\wedge_C\cdots\wedge_C e_{i_p}\wedge_CB_{j_1\cdots j_r}e_{j_1}\wedge_C\cdots\wedge_C e_{j_r}) \\ &= A_{i_1\cdots i_p}e_{i_1}\wedge_T\cdots\wedge_T e_{i_p}\wedge_T(B_{j_1\cdots j_r}e_{j_1})\wedge_T\cdots\wedge_T e_{j_r} \\ &= (A_{i_1\cdots i_p}e_{i_1}\wedge_T\cdots\wedge_T e_{i_p})\wedge_T(B_{j_1\cdots j_r}e_{j_1}\wedge_T\cdots\wedge_T e_{j_r}) \\ &=\Gamma(A_{i_1\cdots i_p}e_{i_1}\wedge_C\cdots\wedge_C e_{i_p})\wedge_T\Gamma(B_{j_1\cdots j_r}e_{j_1}\wedge_C\cdots\wedge_C e_{j_r}) \\ &= \Gamma(A)\wedge_T\Gamma(B)\end{align}$$
A linear, invertible, multiplicative map is by definition an isomorphism of algebras. Hence $\Cl$ is isomorphic to $\Lambda V$, as desired.
Another Interesting Tidbit:
$\Lambda V$ is actually (or at least can be) defined by a set of axioms like other types of vector spaces.
From nlab:
Suppose $V$ is a vector space over a field $K$. Then the exterior algebra $\Lambda V$ is generated by the elements of $V$ using these operations:
- addition and scalar multiplication
- an associative binary operation $\wedge$ called the exterior product or wedge product,
subject to these identities:
- the identities necessary for $\Lambda V$ to be an associative algebra
- the identity $v\wedge v = 0$ for all $v\in V$.
It is easily confirmed that $\Cl$ satisfies these properties under the definition of $\wedge$ given above (which is why associativity is so important for $\wedge$). Thus, not only can we construct an isomorphism, but we can even consider $\Cl$ to just be $\Lambda V$ with extra structure (the Clifford product).
$\dagger:$ To prove this, note that it is known that if the base field does not have characteristic $2$, then $V$ has an orthogonal basis wrt any symmetric bilinear form. Then you can construct a "standard" basis for $\Cl$ and decompose your multivectors wrt it.
$\ddagger:$ By the linearity of the grade projection operator.
Best Answer
Exterior algebra
Exterior algebra defines an antisymmetric wedge product. An example of the wedge product of two unit vectors, called a two-form, is
$$\mathbf{e}_1 \wedge \mathbf{e}_2 = -\mathbf{e}_2 \wedge \mathbf{e}_1.$$
An example of a wedge product of three (unit) vectors, a three-form, is
$$\begin{aligned}\mathbf{e}_1 \wedge \mathbf{e}_2 \wedge \mathbf{e}_3 &= -\mathbf{e}_2 \wedge \mathbf{e}_1 \wedge \mathbf{e}_3 \\ &= \mathbf{e}_2 \wedge \mathbf{e}_3 \wedge \mathbf{e}_1 \\ &= -\mathbf{e}_3 \wedge \mathbf{e}_2 \wedge \mathbf{e}_1.\end{aligned}$$
A consequence of this antisymmetry is that any wedge product in which one of the wedged vectors is collinear with another is zero.
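One concrete way to see both facts is to model the two-form $\mathbf{a}\wedge\mathbf{b}$ as the antisymmetrized outer product $\mathbf{a}\mathbf{b}^{\mathrm T} - \mathbf{b}\mathbf{a}^{\mathrm T}$ (a representation I'm choosing for illustration, not part of the answer):

```python
import numpy as np

def wedge2(a, b):
    # represent a ^ b by the antisymmetric matrix a b^T - b a^T
    return np.outer(a, b) - np.outer(b, a)

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 4.0])

assert np.allclose(wedge2(a, b), -wedge2(b, a))   # a ^ b = -(b ^ a)
assert np.allclose(wedge2(a, 2.5 * a), 0.0)       # collinear factor gives zero
```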
Exterior algebra also has the concept of duality, which provides a mapping between $k$-forms and $(N-k)$-forms, where $N$ is the dimension of the underlying vector space. For example, in a three-dimensional Euclidean space the dual of the two-form $\mathbf{e}_1 \wedge \mathbf{e}_2$, denoted $*\left( { \mathbf{e}_1 \wedge \mathbf{e}_2} \right)$, is the quantity satisfying
$$*\left( {\mathbf{e}_1 \wedge \mathbf{e}_2} \right) \wedge \left( { \mathbf{e}_1 \wedge \mathbf{e}_2} \right) = \mathbf{e}_1 \wedge \mathbf{e}_2 \wedge \mathbf{e}_3,$$
so $$*\left( {\mathbf{e}_1 \wedge \mathbf{e}_2} \right) = \mathbf{e}_3.$$
I believe that Grassmann algebras have the same structure as exterior algebras, but also define a regressive product related to the exterior algebra dual.
Geometric algebra
In an exterior algebra, one can add k-forms to other k-forms, but would not add forms of different rank. This restriction is relaxed in geometric algebra (GA), where a quantity such as
$$1 + 2 \mathbf{e}_1 + 3 \mathbf{e}_2 \wedge \mathbf{e}_4 + 5 \mathbf{e}_1 \wedge \mathbf{e}_2 \wedge \mathbf{e}_4,$$
is perfectly well formed. The geometric algebra is built up of products of vectors, where the vector product is defined as an associative product
$$\mathbf{a} (\mathbf{b} \mathbf{c}) = (\mathbf{a} \mathbf{b}) \mathbf{c} = \mathbf{a} \mathbf{b} \mathbf{c},$$
and where the product of a vector with itself is defined as the squared length of that vector
$$\mathbf{a} \mathbf{a} = \mathbf{a} \cdot \mathbf{a} = \left\lvert {\mathbf{a}} \right\rvert^2.$$
In a Euclidean space this squared length is always positive, but mixed-signature metrics (such as that of the Minkowski space used in special relativity) are also allowed.
The product of two non-collinear vectors can be factored as
$$\mathbf{a} \mathbf{b} = \frac{1}{{2}} \left( { \mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a} } \right) + \frac{1}{{2}} \left( { \mathbf{a} \mathbf{b} - \mathbf{b} \mathbf{a} } \right).$$
The first (symmetric) term can be identified with the dot product, whereas the second, completely antisymmetric, term can be identified with the wedge product, so this complete vector product is denoted
$$\mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b}.$$
This is one of the simplest examples of what is called a multivector in GA, containing the sum of a scalar (grade zero) and a bivector (grade two).
There are a number of other consequences of the product axioms of GA. One such consequence is that the product of two perpendicular vectors is antisymmetric, and that any unit vector has a unit square. A number of specific algebraic structures can be represented with geometric algebras. For example, one can identify the algebra spanned by a scalar and unit bivector, such as
$$\text{span} \left\{ { 1, \mathbf{e}_1 \mathbf{e}_2 } \right\}$$
with complex numbers. This is because any unit bivector of this form (in a Euclidean space) squares to $-1$
$$\begin{aligned}(\mathbf{e}_1 \mathbf{e}_2)^2 &= (\mathbf{e}_1 \mathbf{e}_2)(\mathbf{e}_1 \mathbf{e}_2) \\ &= \mathbf{e}_1 (\mathbf{e}_2 \mathbf{e}_1) \mathbf{e}_2 \\ &= -\mathbf{e}_1 (\mathbf{e}_1 \mathbf{e}_2) \mathbf{e}_2 \\ &= -(\mathbf{e}_1 \mathbf{e}_1) (\mathbf{e}_2 \mathbf{e}_2) \\ &= - (1)(1) \\ &= -1.\end{aligned}$$
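To see the identification concretely, here is a tiny sketch (the pair representation is my own): elements $a + b\,\mathbf{e}_1\mathbf{e}_2$ multiplied using $(\mathbf{e}_1\mathbf{e}_2)^2 = -1$ reproduce complex multiplication.

```python
def mul(p, q):
    # p = (a, b) stands for a + b e1e2, with (e1e2)^2 = -1
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

assert mul((0.0, 1.0), (0.0, 1.0)) == (-1.0, 0.0)          # (e1e2)^2 = -1
z, w = (3.0, 4.0), (1.0, -2.0)
assert complex(*mul(z, w)) == complex(*z) * complex(*w)    # agrees with C
```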
Other examples of algebraic structures that can have GA representations include quaternions, the Pauli (spin) algebra of quantum mechanics, and the Dirac algebra from QED.
The GA representation of dual vectors is through multiplication by a (unit) pseudoscalar for the vector space (an ordered product of all of its unit basis vectors), often denoted $I$. For example, negative multiplication by the three dimensional pseudoscalar has the duality property illustrated in the exterior algebra duality example
$$\begin{aligned}-I (\mathbf{e}_1 \mathbf{e}_2)&=-\mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_1 \mathbf{e}_2 \\ &=\mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_1 \mathbf{e}_3 \mathbf{e}_2 \\ &=-\mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \\ &=\mathbf{e}_1 \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_2 \mathbf{e}_3 \\ &=\mathbf{e}_3,\end{aligned}$$
$$\begin{aligned}-I\mathbf{e}_2 \mathbf{e}_3&=- \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_2 \mathbf{e}_3 \\ &=\mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_3 \\ &=\mathbf{e}_1.\end{aligned}$$
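These duality maps can be checked mechanically. The sketch below (my own minimal blade-product routine, not from the answer) multiplies basis blades of the Euclidean three-dimensional algebra and confirms $-I(\mathbf{e}_1\mathbf{e}_2)=\mathbf{e}_3$ and $-I(\mathbf{e}_2\mathbf{e}_3)=\mathbf{e}_1$:

```python
def blade_mul(a, b):
    # product of Euclidean Cl(3) basis blades, given as sorted index tuples
    lst, sign = list(a) + list(b), 1
    for i in range(len(lst)):          # bubble sort; each swap flips the sign
        for j in range(len(lst) - 1 - i):
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
                sign = -sign
    out, i = [], 0
    while i < len(lst):
        if i + 1 < len(lst) and lst[i] == lst[i + 1]:
            i += 2                     # e_k e_k = +1 (Euclidean)
        else:
            out.append(lst[i])
            i += 1
    return tuple(out), sign

I = (0, 1, 2)                          # pseudoscalar e1 e2 e3 (0-indexed)
assert blade_mul(I, (0, 1)) == ((2,), -1)   # I(e1 e2) = -e3, so -I(e1 e2) = e3
assert blade_mul(I, (1, 2)) == ((0,), -1)   # I(e2 e3) = -e1, so -I(e2 e3) = e1
```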
A number of fundamental geometric operations, such as projection, rotation, and reflection can all be represented using GA multivector product operations.
Clifford algebra
In GA the basis vectors for the space are typically real valued vectors. Complex valued vectors have uses in GA (e.g. the frequency domain representation of vectors in electrodynamics), but the underlying basis for the vector space is still real valued (e.g. $\text{span} \left\{ { \mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3 } \right\}$ ).
Clifford algebras provide a further generalization, allowing those basis vectors to reside in a complex vector space, with suitable modifications of the vector product rules.
Multilinear
All of these algebras are multilinear, in the sense that their products are linear in each factor. For example, in an exterior algebra
$$\mathbf{a} \wedge (\alpha \mathbf{b} + \beta \mathbf{c}) = \alpha \mathbf{a} \wedge \mathbf{b} + \beta \mathbf{a} \wedge \mathbf{c},$$ $$(\alpha \mathbf{b} + \beta \mathbf{c})\wedge \mathbf{a} = \alpha \mathbf{b} \wedge \mathbf{a} + \beta \mathbf{c} \wedge \mathbf{a},$$
or in GA
$$\mathbf{a} \left( { \alpha \mathbf{b} + \beta \mathbf{c} \mathbf{d} } \right)= \alpha \mathbf{a} \mathbf{b} + \beta \mathbf{a} \mathbf{c} \mathbf{d}.$$ $$\left( { \alpha \mathbf{b} + \beta \mathbf{c} \mathbf{d} } \right) \mathbf{a} = \alpha \mathbf{b} \mathbf{a} + \beta \mathbf{c} \mathbf{d}\mathbf{a}.$$