Geometric Algebra Grade Projection

clifford-algebras, geometric-algebras

I was reading through the book *Geometric Algebra for Physicists* and, although I find it very useful, there are steps I feel are missing and that I haven't been able to find in any other GA book.

I was hoping someone could explain the steps to me.
Equation (3.126) states, where $x$ is a vector and $A, B$ are bivectors:

$$ A ·(x\wedge(x·B))=\langle Ax(x·B) \rangle = \langle (A·x)xB \rangle=B·(x\wedge(x·A)) $$

In equation (4.154), $f_{1,2}$ are vectors and $A_r,B_r$ are $r$-blades:

$$ \alpha \langle A_r(B_r·f_1)\wedge f_2\rangle =\alpha \langle f_2·A_rB_rf_1\rangle$$

Could someone explain the steps followed in detail?

Best regards.

Best Answer

For the first identity, since $ x \cdot B $ is a vector, $ x \wedge ( x \cdot B ) $ must be a bivector (grade 2). The product of the bivector $ A $ with another bivector is a multivector with grades that may include 0, 2, and 4, and the dot product of those two bivectors is the grade-0 selection. That is
$$\begin{aligned} A \cdot \left( { x \wedge (x \cdot B) } \right) &= \left\langle{{ A \left( { x \wedge (x \cdot B) } \right) }}\right\rangle \\ &= \left\langle{{ A \left( { x (x \cdot B) - x \cdot (x \cdot B) } \right) }}\right\rangle.\end{aligned}$$
Here we used $ x y = x \cdot y + x \wedge y $. The term $ x \cdot (x \cdot B) $ is a scalar, and a scalar times the bivector $ A $ is still a bivector, so it makes no contribution to the grade-0 selection. This leaves us with
$$\begin{aligned} A \cdot \left( { x \wedge (x \cdot B) } \right) &= \left\langle{{ A x (x \cdot B) }}\right\rangle \\ &= \left\langle{{ (A \cdot x) (x \cdot B) }}\right\rangle \\ &= \left\langle{{ (A \cdot x) (x B - x \wedge B) }}\right\rangle \\ &= \left\langle{{ (A \cdot x) x B }}\right\rangle.\end{aligned}$$
Here we made use of the fact that $ A x = A \cdot x + A \wedge x $, a vector plus a trivector, of which only the vector component contributes to the scalar selection. We are then free to rewrite $ x \cdot B $ in terms of $ x B $ minus a trivector $ x \wedge B $ that also makes no contribution to the scalar selection.

The final result follows from the fact that reversion leaves a scalar unchanged, so $ \left\langle (A \cdot x) x B \right\rangle = \left\langle B^\dagger x (A \cdot x)^\dagger \right\rangle $. The two sign flips $ B^\dagger = -B $ and $ A \cdot x = -x \cdot A $ cancel, giving
$$\begin{aligned} \left\langle{{ (A \cdot x) x B }}\right\rangle &= \left\langle{{ B x (x \cdot A) }}\right\rangle \\ &= B \cdot \left( { x \wedge (x \cdot A) } \right),\end{aligned}$$
where we are able to use the very first step to write the scalar selection back in terms of dots and wedges.
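Since every expression in the chain reduces to the same scalar-grade selection, the identity can be spot-checked numerically. Below is a minimal, hand-rolled sketch of $\mathrm{Cl}(3,0)$ arithmetic (the bitmask blade encoding is my own illustrative choice, not from the book); it checks that $A \cdot (x \wedge (x \cdot B))$, $\langle A x (x \cdot B)\rangle$, and $B \cdot (x \wedge (x \cdot A))$ agree for random inputs:

```python
import random

# A multivector in Cl(3,0) is a list of 8 coefficients indexed by a blade
# bitmask: bit i set means basis vector e_{i+1} is present, e.g. index
# 0b011 is the bivector e1^e2.

def reorder_sign(a, b):
    """Sign from sorting the basis vectors of blades a and b into
    canonical order (Euclidean metric: every e_i squares to +1)."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return 1 if swaps % 2 == 0 else -1

def gp(A, B):
    """Full geometric product A B."""
    out = [0.0] * 8
    for a, ca in enumerate(A):
        for b, cb in enumerate(B):
            if ca and cb:
                out[a ^ b] += reorder_sign(a, b) * ca * cb
    return out

def grade(M, k):
    """Grade projection <M>_k."""
    return [c if bin(i).count("1") == k else 0.0 for i, c in enumerate(M)]

rng = random.Random(1)
x = grade([rng.uniform(-1, 1) for _ in range(8)], 1)  # random vector
A = grade([rng.uniform(-1, 1) for _ in range(8)], 2)  # random bivector
B = grade([rng.uniform(-1, 1) for _ in range(8)], 2)  # random bivector

x_dot_B = grade(gp(x, B), 1)  # x . B, a vector
x_dot_A = grade(gp(x, A), 1)  # x . A, a vector

lhs = grade(gp(A, grade(gp(x, x_dot_B), 2)), 0)[0]  # A . (x ^ (x . B))
mid = gp(A, gp(x, x_dot_B))[0]                      # <A x (x . B)>
rhs = grade(gp(B, grade(gp(x, x_dot_A), 2)), 0)[0]  # B . (x ^ (x . A))

print(lhs, mid, rhs)  # all three agree
assert abs(lhs - mid) < 1e-12 and abs(lhs - rhs) < 1e-12
```

Changing the seed exercises the identity on other random vectors and bivectors; the three numbers always coincide, as the derivation above says they must.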

The result of 4.154 probably requires similar logic.
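As a sketch (my own reconstruction, not the book's text), the same two tools suffice: dropping terms of the wrong grade inside the scalar selection, plus the cyclic invariance $\langle M N \rangle = \langle N M \rangle$. Since $B_r \cdot f_1$ has grade $r-1$,
$$\begin{aligned} \alpha \left\langle A_r \, (B_r \cdot f_1) \wedge f_2 \right\rangle &= \alpha \left\langle A_r (B_r \cdot f_1) f_2 \right\rangle \\ &= \alpha \left\langle f_2 A_r (B_r \cdot f_1) \right\rangle \\ &= \alpha \left\langle (f_2 \cdot A_r)(B_r \cdot f_1) \right\rangle \\ &= \alpha \left\langle f_2 \cdot A_r B_r f_1 \right\rangle.\end{aligned}$$
In the first step the dropped term $(B_r \cdot f_1) \cdot f_2$ has grade $r-2$, so its product with the grade-$r$ blade $A_r$ has no scalar part; the second step is cyclic invariance; the third drops $f_2 \wedge A_r$ (grade $r+1$, no scalar part against the grade-$(r-1)$ factor); the last restores $B_r \cdot f_1 \to B_r f_1$ because the discarded $B_r \wedge f_1$ likewise contributes no scalar.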