Differentiation in Geometric Calculus, Computing Vector Derivatives of Multivector-Valued Functions

clifford-algebras, derivatives, exterior-algebra, geometric-algebras

I haven't found an explicit formula or procedure for computing vector derivatives in geometric calculus. For instance, let $V \simeq \mathbb{R}^3$ with the usual orthonormal basis $\{\textbf{e}_i\}_{i=1}^3$ and $C \ell(V)$ its universal Clifford algebra. Consider the multivector-valued function of a vector, that is, $F: P_1(C\ell(V)) \to C \ell (V)$ (where $P_1$ is the grade-1 projection operator), defined as
$$F(x) = x(\textbf{e}_1 - \textbf{e}_2) + \textbf{e}_1\textbf{e}_2 \textbf{e}_3$$
where $x \in P_1(C \ell (V))$. Consider that $x = \textbf{e}_1$, then
$$F(\textbf{e}_1) = \textbf{e}_1(\textbf{e}_1 - \textbf{e}_2) + \textbf{e}_1\textbf{e}_2 \textbf{e}_3 = {\textbf{e}_1}^2 - \textbf{e}_1 \textbf{e}_2 + \textbf{e}_1\textbf{e}_2\textbf{e}_3$$
$$F(\textbf{e}_1) = 1 - \textbf{e}_1 \textbf{e}_2 + \textbf{e}_1\textbf{e}_2\textbf{e}_3$$

What would it mean to take the vector derivative $\partial_x$ of the function $F$? My line of reasoning is
$$\partial_x F(x) = \partial_x (x \textbf{e}_1) - \partial_x (x\textbf{e}_2) + \partial_x (\textbf{e}_1\textbf{e}_2\textbf{e}_3)$$
and, using $x=\textbf{e}_1$ for instance, we would have
$$\partial_{\textbf{e}_1} F = \partial_{\textbf{e}_1}({\textbf{e}_1}^2) - \partial_{\textbf{e}_1}(\textbf{e}_1)\textbf{e}_2 + \partial_{\textbf{e}_1}(\textbf{e}_1)\textbf{e}_2\textbf{e}_3$$
where $\partial_{\textbf{e}_1}({\textbf{e}_1}^2) = 2\textbf{e}_1$; but since ${\textbf{e}_1}^2 = 1$ and $\partial_{\textbf{e}_1}(1) = 0$, this reasoning leads to an ambiguity. In the end, either
$$\partial_{\textbf{e}_1} F = 2\textbf{e}_1 -\textbf{e}_2 + \textbf{e}_2\textbf{e}_3$$
or
$$\partial_{\textbf{e}_1} F = 0 -\textbf{e}_2 + \textbf{e}_2\textbf{e}_3$$

This most likely isn't correct; I'm having a hard time understanding how to compute these derivatives in the Clifford algebra. Once that question is answered, I would also like to understand how to compute an $n$-vector derivative and even a full multivector derivative.

In Alan Macdonald's book, Vector and Geometric Calculus, he treats $\mathbb{R}^m$ as a vector space and simply defines the vector derivative as
$$\partial_{h} F = h^i \frac{\partial F}{\partial x^i} $$
where $h = h^i\textbf{e}_i$ and $x^i$ are coordinates on $\mathbb{R}^m$. But this implicitly restricts any function $F$ to being defined on $\mathbb{R}^m$ rather than on general subspaces of $C \ell(\mathbb{R}^m)$.
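Macdonald's coordinate formula can be checked mechanically. Below is a minimal sketch of my own (a toy $C\ell(\mathbb{R}^3)$ with an orthonormal Euclidean basis, so $e^i = e_i$; none of the helper names come from the book) that evaluates $\partial F = \sum_i \textbf{e}^i \, \partial F/\partial x^i$ for the $F$ in the question by finite differences. Note that the derivative is taken before substituting a particular $x$, which dissolves the ambiguity above: the result is $3(\textbf{e}_1 - \textbf{e}_2)$ at every $x$.

```python
def bprod(a, b):
    """Product of two basis blades (tuples of indices), assuming an
    orthonormal Euclidean basis so that e_i e_i = +1."""
    lst = list(a) + list(b)
    sign, i = 1, 0
    while i < len(lst) - 1:
        if lst[i] > lst[i + 1]:
            lst[i], lst[i + 1] = lst[i + 1], lst[i]   # anticommute: flip sign
            sign = -sign
            i = max(i - 1, 0)
        elif lst[i] == lst[i + 1]:
            del lst[i:i + 2]                           # e_i e_i = 1
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(lst)

def gp(A, B):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = bprod(ba, bb)
            out[blade] = out.get(blade, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def add(A, B):
    out = dict(A)
    for k, v in B.items():
        out[k] = out.get(k, 0.0) + v
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def scale(A, s):
    return {k: s * v for k, v in A.items()}

e = {i: {(i,): 1.0} for i in (1, 2, 3)}    # basis vectors e1, e2, e3

def F(x):
    """F(x) = x (e1 - e2) + e1 e2 e3, for a *vector* argument x."""
    return add(gp(x, add(e[1], scale(e[2], -1.0))), {(1, 2, 3): 1.0})

def vector_derivative(F, x, h=1e-6):
    """del F = sum_i e^i dF/dx^i, by finite differences (e^i = e_i here)."""
    out = {}
    for i in (1, 2, 3):
        dF = scale(add(F(add(x, scale(e[i], h))), scale(F(x), -1.0)), 1.0 / h)
        out = add(out, gp(e[i], dF))
    return out

print(vector_derivative(F, e[1]))   # ~ {(1,): 3.0, (2,): -3.0}, i.e. 3 (e1 - e2)
```

The factor of $3$ is $\sum_i \textbf{e}^i \textbf{e}_i = n = 3$, which foreshadows the $2^3$ that appears below when the domain is enlarged to the whole algebra.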

In David Hestenes and Garret Sobczyk's book, Clifford Algebra to Geometric Calculus: A Unified Language for Mathematics and Physics, they define the vector derivative via the directional derivative as
$$a \cdot \partial_x F(x) = \left.\frac{\partial}{\partial \tau} F(x+a\tau ) \right\vert_{\tau =0}
= \lim_{\tau \to 0} \frac{F(x+a\tau) - F(x)}{\tau}$$
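This limit definition can be evaluated numerically without ever writing $\partial_x$ in a basis: one simply forms the difference quotient. Here is a minimal sketch of my own (a toy $C\ell(\mathbb{R}^3)$ over an orthonormal Euclidean basis; the helper names are mine, not from the book), applied to the $F$ in the question:

```python
def bprod(a, b):
    """Product of two basis blades (tuples of indices), assuming an
    orthonormal Euclidean basis so that e_i e_i = +1."""
    lst = list(a) + list(b)
    sign, i = 1, 0
    while i < len(lst) - 1:
        if lst[i] > lst[i + 1]:
            lst[i], lst[i + 1] = lst[i + 1], lst[i]   # anticommute: flip sign
            sign = -sign
            i = max(i - 1, 0)
        elif lst[i] == lst[i + 1]:
            del lst[i:i + 2]                           # e_i e_i = 1
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(lst)

def gp(A, B):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = bprod(ba, bb)
            out[blade] = out.get(blade, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def add(A, B):
    out = dict(A)
    for k, v in B.items():
        out[k] = out.get(k, 0.0) + v
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def scale(A, s):
    return {k: s * v for k, v in A.items()}

e = {i: {(i,): 1.0} for i in (1, 2, 3)}    # basis vectors e1, e2, e3

def F(x):
    """F(x) = x (e1 - e2) + e1 e2 e3."""
    return add(gp(x, add(e[1], scale(e[2], -1.0))), {(1, 2, 3): 1.0})

def directional_derivative(F, x, a, t=1e-6):
    """a . del F(x) = lim_{t -> 0} (F(x + t a) - F(x)) / t."""
    return scale(add(F(add(x, scale(a, t))), scale(F(x), -1.0)), 1.0 / t)

# At x = e1 in the direction a = e1 the answer is e1 (e1 - e2) = 1 - e1 e2,
# since F is affine in x and the constant trivector drops out of the quotient.
print(directional_derivative(F, e[1], e[1]))
```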

and, due to the generality desired, they never give $\partial_x$ an explicit formula, since doing so would require a choice of basis. They do, however, derive its properties and its "algebra" extensively, including the decomposition
$$\partial_x F = \partial_x \cdot F + \partial_x \wedge F$$

In the Wikipedia article on geometric calculus (https://en.wikipedia.org/wiki/Geometric_calculus), the derivative
$$\partial_{\textbf{e}_i} = \partial_i$$
is simply stated to be the derivative in the direction of $\textbf{e}_i$. Does this mean computing $\partial/\partial x^i$ just as Macdonald does in his book?

If this is indeed the case, that is, if the identification of points in $\mathbb{R}^n$ with vectors in $P_1(C\ell(V))$ is essential to computing these derivatives, how does the theory carry over to manifolds as the base space, where points can no longer be treated as vectors?

So, to recap: I haven't been able to understand how to compute vector derivatives of multivector-valued functions on $P_1(C\ell(V))$. From all I could see, this operation depends on the base space $\mathbb{R}^n \simeq V$ to allow those calculations, but that seems to restrict the functions to $\mathbb{R}^n$ rather than to genuine vectors, $p$-vectors, and multivectors.

Best Answer

As you noted, there is some notational inconsistency between different authors on this subject. You mentioned [1], who writes the directional derivative as $$ \partial_\mathbf{h} F(\mathbf{x}) = \lim_{t\rightarrow 0} \frac{F(\mathbf{x} + t \mathbf{h}) - F(\mathbf{x})}{t},$$ where he makes the identification $ \partial_\mathbf{h} F(\mathbf{x}) = \left( { \mathbf{h} \cdot \boldsymbol{\nabla} } \right) F(\mathbf{x}) $. Similarly [2] writes $$ A * \partial_X F(X) = {\left.{{\frac{dF(X + t A)}{dt}}}\right\vert}_{{t = 0}},$$ where $ A * B = \left\langle{{ A B }}\right\rangle $ is a scalar grade operator. In the first case, the domain of the function $ F $ was vectors, whereas the second construction is an explicit multivector formulation. Should the domain of $ F $ be restricted to vectors, we may make the identification $ \partial_X = \boldsymbol{\nabla} = \sum e^i \partial_i $, however we are interested in the form of the derivative operator for multivectors. To see how that works, let's expand out the directional derivative in coordinates.

The first step is a coordinate expansion of our multivector $ X $. We may write $$ X = \sum_{i < \cdots < j} \left( { X * \left( { e_i \wedge \cdots \wedge e_j } \right) } \right) \left( { e_i \wedge \cdots \wedge e_j } \right)^{-1},$$ or $$ X = \sum_{i < \cdots < j} \left( { X * \left( { e^i \wedge \cdots \wedge e^j } \right) } \right) \left( { e^i \wedge \cdots \wedge e^j } \right)^{-1}.$$ In either case, the basis $ \left\{ { e_1, \cdots, e_m } \right\} $ need not be orthonormal, nor even Euclidean. In the latter case, we've written the components of the multivector in terms of the reciprocal frame satisfying $ e^i \cdot e_j = {\delta^i}_j $, where $ e^i \in \text{span} \left\{ { e_1, \cdots, e_m } \right\} $. Both of these expansions are effectively coordinate expansions. We may make that more explicit, by writing $$\begin{aligned} X^{i \cdots j} &= X * \left( { e^j \wedge \cdots \wedge e^i } \right) \\ X_{i \cdots j} &= X * \left( { e_j \wedge \cdots \wedge e_i } \right),\end{aligned}$$ so $$ X = \sum_{i < \cdots < j} X^{i \cdots j} \left( { e_i \wedge \cdots \wedge e_j } \right) = \sum_{i < \cdots < j} X_{i \cdots j} \left( { e^i \wedge \cdots \wedge e^j } \right).$$
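These scalar-grade coordinate extractions are easy to verify mechanically. Here is a small sketch of my own (a toy $C\ell(\mathbb{R}^3)$ with an orthonormal Euclidean basis, so $e^i = e_i$ and each reciprocal blade is just the reversed blade; the helper names are mine): every coordinate $X^{i \cdots j}$ comes out as the scalar part of $X$ times the reversed blade.

```python
from itertools import combinations

def bprod(a, b):
    """Product of two basis blades (tuples of indices), assuming an
    orthonormal Euclidean basis so that e_i e_i = +1."""
    lst = list(a) + list(b)
    sign, i = 1, 0
    while i < len(lst) - 1:
        if lst[i] > lst[i + 1]:
            lst[i], lst[i + 1] = lst[i + 1], lst[i]   # anticommute: flip sign
            sign = -sign
            i = max(i - 1, 0)
        elif lst[i] == lst[i + 1]:
            del lst[i:i + 2]                           # e_i e_i = 1
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(lst)

def gp(A, B):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = bprod(ba, bb)
            out[blade] = out.get(blade, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

# All eight basis blades of Cl(R^3), as ascending index tuples.
blades = [tuple(c) for k in range(4) for c in combinations((1, 2, 3), k)]

def coordinate(X, b):
    """X^{i...j} = X * (e^j ^ ... ^ e^i): the scalar part of X times the
    reversed blade (which is the blade's inverse for this metric)."""
    return gp(X, {b[::-1]: 1.0}).get((), 0.0)

X = {(): 2.0, (1,): -1.0, (1, 2): 5.0, (1, 2, 3): 0.5}
# Recovers exactly the coefficients that X was built from:
print({b: coordinate(X, b) for b in blades if coordinate(X, b) != 0.0})
```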

To make things more concrete, assume that the domain of $ F $ is a two dimensional geometric algebra, where we may represent a multivector with coordinates $$ X = x^0 + x^1 e_1 + x^2 e_2 + x^{12} e_{12},$$ where $ e_{12} = e_1 \wedge e_2 $ is a convenient shorthand. We can now expand the directional derivative in coordinates $$\begin{aligned} {\left.{{\frac{dF(X + t A)}{dt}}}\right\vert}_{{t = 0}} &= {\left.{{ \frac{\partial {F}}{\partial {(x^0 + t a^0)}} \frac{\partial {(x^0 + t a^0)}}{\partial {t}} }}\right\vert}_{{t = 0}} + {\left.{{ \frac{\partial {F}}{\partial {(x^1 + t a^1)}} \frac{\partial {(x^1 + t a^1)}}{\partial {t}} }}\right\vert}_{{t = 0}} \\ &\quad + {\left.{{ \frac{\partial {F}}{\partial {(x^2 + t a^2)}} \frac{\partial {(x^2 + t a^2)}}{\partial {t}} }}\right\vert}_{{t = 0}} + {\left.{{ \frac{\partial {F}}{\partial {(x^{12} + t a^{12})}} \frac{\partial {(x^{12} + t a^{12})}}{\partial {t}} }}\right\vert}_{{t = 0}} \\ &= a^0 \frac{\partial {F}}{\partial {x^0}} + a^1 \frac{\partial {F}}{\partial {x^1}} + a^2 \frac{\partial {F}}{\partial {x^2}} + a^{12} \frac{\partial {F}}{\partial {x^{12}}}.\end{aligned}$$ We may express the $ A $ dependence above without coordinates by introducing a number of factors of unity $$\begin{aligned} {\left.{{\frac{dF(X + t A)}{dt}}}\right\vert}_{{t = 0}} &= \left( {a^0 1} \right) 1 \frac{\partial {F}}{\partial {x^0}} + \left( { a^1 e_1 } \right) e^1 \frac{\partial {F}}{\partial {x^1}} + \left( { a^2 e_2 } \right) e^2 \frac{\partial {F}}{\partial {x^2}} + \left( { a^{12} e_{12} } \right) e^{21} \frac{\partial {F}}{\partial {x^{12}}} \\ &= \left( { \left( {a^0 1} \right) 1 \frac{\partial {}}{\partial {x^0}} + \left( { a^1 e_1 } \right) e^1 \frac{\partial {}}{\partial {x^1}} + \left( { a^2 e_2 } \right) e^2 \frac{\partial {}}{\partial {x^2}} + \left( { a^{12} e_{12} } \right) e^{21} \frac{\partial {}}{\partial {x^{12}}} } \right) F \\ &= A * \left( { \frac{\partial {}}{\partial {x^0}} + e^1 \frac{\partial {}}{\partial {x^1}} + 
e^2 \frac{\partial {}}{\partial {x^2}} + e^{21} \frac{\partial {}}{\partial {x^{12}}} } \right) F.\end{aligned}$$ Now we see the form of the multivector derivative, which is $$ \partial_X = \frac{\partial {}}{\partial {x^0}} + e^1 \frac{\partial {}}{\partial {x^1}} + e^2 \frac{\partial {}}{\partial {x^2}} + e^{21} \frac{\partial {}}{\partial {x^{12}}},$$ or more generally $$ \partial_X = \sum_{i < \cdots < j} e^{j \cdots i} \frac{\partial {}}{\partial {x^{i \cdots j}}}.$$

Let's apply this to your function $$\begin{aligned} F(X) &= X \left( { e_1 - e_2 } \right) + e_1 e_2 e_3 \\ &= \left( { x^0 + x^1 e_1 + x^2 e_2 + x^3 e_3 + x^{12} e_{12} + x^{23} e_{23} + x^{13} e_{13} + x^{123} e_{123} } \right) \left( { e_1 - e_2 } \right) + e_1 e_2 e_3.\end{aligned}$$ Our multivector gradient is $$\begin{aligned} \partial_X F(X) &= \left( { 1 + e^1 e_1 + e^2 e_2 + e^3 e_3 + e^{21} e_{12} + e^{32} e_{23} + e^{31} e_{13} + e^{321} e_{123} } \right) \left( { e_1 - e_2 } \right) \\ &= 2^3 \left( { e_1 - e_2 } \right).\end{aligned}$$ We have had to resort to coordinates to compute the multivector gradient, but in the end, we do end up (at least in this case) with a coordinate-free result.
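As a sanity check on the $2^3(e_1 - e_2)$ result, the sketch below (toy code of my own, with an orthonormal Euclidean basis so that each reciprocal blade $e^{j \cdots i}$ is just the reversed blade) applies $\partial_X = \sum e^{j \cdots i}\, \partial/\partial x^{i \cdots j}$ to this $F$ by finite differences over all eight blade coordinates of $C\ell(\mathbb{R}^3)$.

```python
from itertools import combinations

def bprod(a, b):
    """Product of two basis blades (tuples of indices), assuming an
    orthonormal Euclidean basis so that e_i e_i = +1."""
    lst = list(a) + list(b)
    sign, i = 1, 0
    while i < len(lst) - 1:
        if lst[i] > lst[i + 1]:
            lst[i], lst[i + 1] = lst[i + 1], lst[i]   # anticommute: flip sign
            sign = -sign
            i = max(i - 1, 0)
        elif lst[i] == lst[i + 1]:
            del lst[i:i + 2]                           # e_i e_i = 1
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(lst)

def gp(A, B):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = bprod(ba, bb)
            out[blade] = out.get(blade, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def add(A, B):
    out = dict(A)
    for k, v in B.items():
        out[k] = out.get(k, 0.0) + v
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def scale(A, s):
    return {k: s * v for k, v in A.items()}

blades = [tuple(c) for k in range(4) for c in combinations((1, 2, 3), k)]
e1, e2 = {(1,): 1.0}, {(2,): 1.0}

def F(X):
    """F(X) = X (e1 - e2) + e1 e2 e3, with a full multivector argument X."""
    return add(gp(X, add(e1, scale(e2, -1.0))), {(1, 2, 3): 1.0})

def multivector_derivative(F, X, h=1e-6):
    """del_X F = sum over blades b of b^{-1} dF/dx^b (b^{-1} = reversed b)."""
    out = {}
    for b in blades:
        dF = scale(add(F(add(X, {b: h})), scale(F(X), -1.0)), 1.0 / h)
        out = add(out, gp({b[::-1]: 1.0}, dF))
    return out

# The same answer comes out at every X: 2^3 (e1 - e2).
print(multivector_derivative(F, {(): 1.0, (2,): 3.0}))
```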

References

[1] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

[2] C. Doran and A.N. Lasenby. Geometric Algebra for Physicists. Cambridge University Press, Cambridge, UK, 1st edition, 2003.
