Generally speaking, if you have a tensor $T$ on a manifold and a collection of (usually coordinate) vector fields $e_1, \cdots, e_n$, the "index notation" for $T$ is (let's assume for a moment that $T$ is bilinear):
$$T_{ij} = T(e_i,e_j)$$
meaning $T_{ij}$ is a real-valued function for each $i$ and $j$, defined wherever the vector fields $\{ e_i : i = 1,2,\cdots n\}$ are defined. On a manifold with a metric (meaning an inner product on every tangent space), it is typical to define
$$g_{ij} = \langle e_i, e_j \rangle$$
where $\langle \cdot, \cdot \rangle$ is the inner product on the tangent spaces.
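For concreteness, here is a small sympy sketch (my own illustration, not from any particular textbook) computing $g_{ij}$ for the polar coordinate vector fields on the plane:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# Polar coordinates on the plane, written in Cartesian components.
x = r * sp.cos(theta)
y = r * sp.sin(theta)

# Coordinate vector fields e_1 = d/dr and e_2 = d/dtheta.
e = [
    sp.Matrix([sp.diff(x, r), sp.diff(y, r)]),
    sp.Matrix([sp.diff(x, theta), sp.diff(y, theta)]),
]

# g_ij = <e_i, e_j>; each entry is a real-valued function of (r, theta).
g = sp.Matrix(2, 2, lambda i, j: sp.simplify(e[i].dot(e[j])))
print(g)   # expected: Matrix([[1, 0], [0, r**2]])
```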
If the tensor takes something other than two vectors as input, the idea is the same. For example, the Riemann curvature tensor is sometimes thought of as a bilinear map from pairs of tangent vectors to skew-adjoint linear transformations of the tangent space, i.e. at every point $p$ of the manifold it is a bilinear map $T_p N \times T_p N \to \mathrm{Hom}(T_p N, T_p N)$ taking values in the skew-adjoint maps (with respect to the inner product). So given $e_i, e_j \in T_p N$, $R(e_i,e_j)$ is a linear transformation of the tangent space, and you can express $R(e_i,e_j)(e_k)$ as a linear combination of the basis vectors: $R(e_i,e_j)(e_k) = \sum_l R^l_{ijk}\, e_l$. Equivalently, $R^l_{ijk} = e^*_l\big(R(e_i,e_j)(e_k)\big)$, where $e_1^*, \cdots, e_n^*$ denotes the basis of the dual space $T^*_p N$ corresponding to the collection $\{e_i\}$. One calls $R^l_{ijk}$ the Riemann tensor "in coordinates".
In case any of this is unfamiliar, $e^*_j(e_i) = 1$ only when $i=j$ and $e^*_j(e_i) = 0$ otherwise. Or "in coordinates" $e^*_j(e_i) = \delta_{ij}$.
I think many intro general relativity textbooks explain this fairly well nowadays. When I was an undergraduate I liked:
- A First Course in General Relativity, Second Edition, by Bernard F. Schutz.
Can linear duals (i.e. linear functionals) be represented using the geometric algebra formalism?
Yes and no.
In geometric algebra, dual vectors can be computed through Hodge duality. Let $\{u_1, u_2, \ldots, u_n\}$ be a basis of an $n$-dimensional vector space, and let $I = u_1\wedge u_2\wedge\cdots\wedge u_n$ be the pseudoscalar it generates (for an orthogonal basis, this is just the geometric product of the $u_i$). Then
$$u^i = (-1)^{i-1}\,(u_1\wedge\cdots\wedge\check{u}_i\wedge\cdots\wedge u_n)\,I^{-1},$$
where the check marks the omitted factor, is the unique vector such that $u^i \cdot u_j = 0$ for $i\neq j$ and $u^i \cdot u_i = 1$. This reciprocal vector $u^i$ is exactly the vector that corresponds, via the metric, to the element of the dual space that is dual to $u_i$.
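As a concrete check, here is a small numpy sketch (my own) in three dimensions, where the duality formula above reduces to cross products divided by the volume:

```python
import numpy as np

# An arbitrary (non-orthogonal) basis of R^3, one vector per row.
u = np.array([[1.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

vol = np.dot(u[0], np.cross(u[1], u[2]))   # scalar factor coming from the pseudoscalar I

# In 3D, u^1 = (u_2 ^ u_3) I^{-1}, etc., reduce to cross products over the volume.
u_recip = np.array([np.cross(u[1], u[2]),
                    np.cross(u[2], u[0]),
                    np.cross(u[0], u[1])]) / vol

# Reciprocity: u^i . u_j = delta_ij.
print(np.round(u_recip @ u.T, 12))   # expected: the 3x3 identity matrix
```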
So, geometric algebra lets you compute those vectors, but linear functionals themselves--as functions--have no place in the algebra. The algebra has elements and functions of elements, but I would hesitate to say that linear functionals are elements of the algebra.
That said, you can also construct a geometric algebra over the dual space.
Which types of tensors admit a representation using geometric algebra?
Any for which you can exhibit an isomorphism between tensors of that form and elements of the algebra itself.
...yes, I know that borders on a non-answer, but let me give an example.
For instance, the linear map $T(a) = B \cdot a$ for vector $a$ and bivector $B$ is a tensor, moreover a linear operator. It's clear that this tensor directly, and uniquely, corresponds to $B$. $B$ entirely determines the action of the tensor.
Contrast this with the form of a general linear operator, $T(a) = \sum_{i=1}^n (a \cdot u^i)\, v_i$ for a basis $\{u_i\}$ (with reciprocal frame $\{u^i\}$) and some other set of vectors $\{v_i\}$, and you see that there is no such direct correspondence in the general case.
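A quick numeric sketch of that general form (my own illustration, using the standard fact that the reciprocal frame is given by the inverse transpose of the basis matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

u = rng.normal(size=(n, n))       # a basis u_i, one vector per row (generically invertible)
u_recip = np.linalg.inv(u).T      # reciprocal frame: u^i . u_j = delta_ij
v = rng.normal(size=(n, n))       # an arbitrary set of image vectors v_i, one per row

def T(a):
    """The general linear operator T(a) = sum_i (a . u^i) v_i."""
    return sum((a @ u_recip[i]) * v[i] for i in range(n))

a = rng.normal(size=n)
M = v.T @ u_recip                 # the matrix of T in the standard basis
print(np.allclose(T(a), M @ a))   # expected: True
```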
However, do objects "sufficiently isomorphic" to differential forms admit a representation in geometric algebra?
That's an easy one. You can write a $k$-form as a $k$-covector field. Any differential form can be written in terms of the algebra--perhaps with the exception of "vector-valued forms" and other such things, but these are no more complicated in geometric calculus than they are in traditional differential forms. Doran and Lasenby or Hestenes and Sobczyk both have extensive chapters on calculus with GA.
Can vector fields=derivations be represented using geometric algebra?
No, with a caveat: the geometric algebra is merely an algebra. It does not care what the underlying vector space is that it is built upon. It does not care whether vector fields are actually derivations.
So, GA can't represent vector fields being derivations because such a consideration is wholly separate from it.
In other words, if you want to take the wedge product of two vector fields and interpret that as meaning something in terms of derivations, that's on you. All GA says is that, if there is a meaningful metric you can impose on the vectors in this vector space, you can build a geometric algebra on it.
Do tangent/cotangent spaces/bundles admit a representation using geometric algebra?
The geometric algebra and its calculus can represent vector fields, but I'm not aware of any construction that allows it to invert things and recover the tangent bundle.
However, if I had to guess, I would say such a thing is probably the inverse of the unit pseudoscalar function on a manifold. Such a function is from $M$ to a grade-$n$ multivector, where $n$ is the dimension of $M$. Inverting this map would yield a map from a pseudoscalar to the manifold, which seems almost exactly like the tangent bundle. Such a function, however, would rely on the pseudoscalar admitting an inverse, which it might not do globally, and I can only imagine this making sense in terms of an embedding.
So where do we stand?
In my opinion, geometric algebra and calculus are more than capable of serving as a full foundation for someone studying differential geometry. Even if you throw away the notion of Hestenes' vector manifolds, you can still use geometric algebra and calculus to compute relations between vector fields or between differential forms. You can translate any differential-forms expression into geometric algebra, and general tensors that don't correspond to GA elements can still be represented as linear functions on those elements instead.
There's already been considerable work on the relationship between GA/GC and differential geometry. I recommend Doran and Lasenby for this; they have an in-depth chapter building on Hestenes' vector manifold theory, in which they develop and expand on much of the calculus of GA. But moreover, they also have a chapter on general relativity, in which they develop an alternative to curved spaces for differential geometry, preferring "gauge fields" on flat manifolds instead. This method is superficially very similar to moving frames, and they use it to generate GA equivalents of the Cartan structure equations.
Best Answer
I just want to point out that GA can be used to make covariant multivectors (or differential forms) on $\mathbb R^n$ without forcing a metric onto it. In other words, the distinction between vectors and covectors (or between $\mathbb R^n$ and its dual) can be maintained.
This is done with a pseudo-Euclidean space $\mathbb R^{n,n}$.
Take an orthonormal set of spacelike vectors $\{\sigma_i\}$ (which square to ${^+}1$) and timelike vectors $\{\tau_i\}$ (which square to ${^-}1$). Define null vectors
$$\Big\{\nu_i=\frac{\sigma_i+\tau_i}{\sqrt2}\Big\}$$
$$\Big\{\mu_i=\frac{\sigma_i-\tau_i}{\sqrt2}\Big\};$$
they're null because
$${\nu_i}^2=\frac{{\sigma_i}^2+2\sigma_i\cdot\tau_i+{\tau_i}^2}{2}=\frac{(1)+2(0)+({^-}1)}{2}=0$$
$${\mu_i}^2=\frac{{\sigma_i}^2-2\sigma_i\cdot\tau_i+{\tau_i}^2}{2}=\frac{(1)-2(0)+({^-}1)}{2}=0.$$
More generally,
$$\nu_i\cdot\nu_j=\frac{\sigma_i\cdot\sigma_j+\sigma_i\cdot\tau_j+\tau_i\cdot\sigma_j+\tau_i\cdot\tau_j}{2}=\frac{(\delta_{i,j})+0+0+({^-}\delta_{i,j})}{2}=0$$
and
$$\mu_i\cdot\mu_j=0.$$
So the spaces spanned by $\{\nu_i\}$ or $\{\mu_i\}$ each have degenerate quadratic forms. But the dot product between them is non-degenerate:
$$\nu_i\cdot\mu_i=\frac{\sigma_i\cdot\sigma_i-\sigma_i\cdot\tau_i+\tau_i\cdot\sigma_i-\tau_i\cdot\tau_i}{2}=\frac{(1)-0+0-({^-}1)}{2}=1$$
$$\nu_i\cdot\mu_j=\frac{\sigma_i\cdot\sigma_j-\sigma_i\cdot\tau_j+\tau_i\cdot\sigma_j-\tau_i\cdot\tau_j}{2}=\frac{(\delta_{i,j})-0+0-({^-}\delta_{i,j})}{2}=\delta_{i,j}$$
Of course, we could have just started with the definition that $\mu_i\cdot\nu_j=\delta_{i,j}=\nu_i\cdot\mu_j$, and $\nu_i\cdot\nu_j=0=\mu_i\cdot\mu_j$, instead of going through "spacetime".
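If you'd like to verify these relations numerically, here is a minimal numpy sketch (my own), taking $n=2$ and the dot product of $\mathbb R^{n,n}$:

```python
import numpy as np

n = 2
# Metric of R^{n,n}: the first n directions (the sigma's) square to +1, the last n (the tau's) to -1.
eta = np.diag([1.0] * n + [-1.0] * n)
dot = lambda a, b: a @ eta @ b

sigma = np.eye(2 * n)[:n]             # sigma_i, one per row
tau = np.eye(2 * n)[n:]               # tau_i, one per row

nu = (sigma + tau) / np.sqrt(2)       # null vectors generating V
mu = (sigma - tau) / np.sqrt(2)       # null vectors generating V*

gram = lambda A, B: np.array([[dot(a, b) for b in B] for a in A])
print(np.round(gram(nu, nu), 12))     # expected: all zeros
print(np.round(gram(mu, mu), 12))     # expected: all zeros
print(np.round(gram(mu, nu), 12))     # expected: identity, i.e. mu_i . nu_j = delta_ij
```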
The space $V$ will be generated by $\{\nu_i\}$, and its dual $V^*$ by $\{\mu_i=\nu^i\}$. The dot product of something in $V^*$ with something in $V$ is a scalar; it is the pairing of a covector (a differential 1-form) with a vector. You can make contravariant multivectors from wedge products of things in $V$, and covariant multivectors from wedge products of things in $V^*$.
You can also take the wedge product of something in $V^*$ with something in $V$.
$$\mu_i\wedge\nu_i=\frac{\sigma_i\wedge\sigma_i+\sigma_i\wedge\tau_i-\tau_i\wedge\sigma_i-\tau_i\wedge\tau_i}{2}=\frac{0+\sigma_i\tau_i-\tau_i\sigma_i-0}{2}=\sigma_i\wedge\tau_i$$
$$\mu_i\wedge\nu_j=\frac{\sigma_i\sigma_j+\sigma_i\tau_j-\tau_i\sigma_j-\tau_i\tau_j}{2},\quad i\neq j$$
What does this mean? ...I suppose it could be a matrix (a mixed variance tensor)!
A matrix can be defined as a bivector:
$$M = \sum_{i,j} M^i\!_j\;\nu_i\wedge\mu_j = \sum_{i,j} M^i\!_j\;\nu_i\wedge\nu^j$$
where each $M^i_j$ is a scalar. Note that $(\nu_i\wedge\mu_j)\neq{^-}(\nu_j\wedge\mu_i)$, so $M$ is not necessarily antisymmetric. The corresponding linear function $f:V\to V$ is (with $\cdot$ the "fat dot product")
$$f(x) = M\cdot x = \frac{Mx-xM}{2}$$
$$= \sum_{i,j} M^i_j(\nu_i\wedge\mu_j)\cdot\sum_k x^k\nu_k$$
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i\mu_j-\mu_j\nu_i}{2}\cdot\nu_k$$
$$= \sum_{i,j,k} M^i_jx^k\frac{(\nu_i\mu_j)\nu_k-\nu_k(\nu_i\mu_j)-(\mu_j\nu_i)\nu_k+\nu_k(\mu_j\nu_i)}{4}$$
(the $\nu$'s anticommute because their dot product is zero:)
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i\mu_j\nu_k+\nu_i\nu_k\mu_j+\mu_j\nu_k\nu_i+\nu_k\mu_j\nu_i}{4}$$
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i(\mu_j\nu_k+\nu_k\mu_j)+(\mu_j\nu_k+\nu_k\mu_j)\nu_i}{4}$$
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i(\mu_j\cdot\nu_k)+(\mu_j\cdot\nu_k)\nu_i}{2}$$
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i(\delta_{j,k})+(\delta_{j,k})\nu_i}{2}$$
$$= \sum_{i,j,k} M^i_jx^k\big(\delta_{j,k}\nu_i\big)$$
$$= \sum_{i,j} M^i_jx^j\nu_i$$
This agrees with the conventional definition of matrix-vector multiplication, $(Mx)^i = \sum_j M^i_j x^j$.
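If you want to check this numerically, here is a sketch using the Python clifford package. I'm assuming that package's `Cl(2, 2)` puts the two basis vectors of square $+1$ first and names the basis blades 'e1' through 'e4'; the specific matrix and vector are just illustrations.

```python
from clifford import Cl

# Assumption: Cl(2, 2) gives e1, e2 squaring to +1 (the sigma's) and e3, e4 squaring to -1 (the tau's).
layout, blades = Cl(2, 2)
s1, s2, t1, t2 = blades['e1'], blades['e2'], blades['e3'], blades['e4']

nu = [(s1 + t1) / 2 ** 0.5, (s2 + t2) / 2 ** 0.5]   # null basis of V
mu = [(s1 - t1) / 2 ** 0.5, (s2 - t2) / 2 ** 0.5]   # null basis of V*

Mcoef = [[2.0, 3.0],
         [5.0, 7.0]]
# The matrix encoded as a bivector: M = sum_ij M^i_j nu_i ^ mu_j.
M = sum(Mcoef[i][j] * (nu[i] ^ mu[j]) for i in range(2) for j in range(2))

x = 1.0 * nu[0] + 4.0 * nu[1]        # the vector with coordinates x^1 = 1, x^2 = 4
fx = (M * x - x * M) / 2             # f(x) = M . x, the commutator ("fat dot") contraction

print(fx)                            # should equal 14 nu_1 + 33 nu_2, since Mcoef @ (1, 4) = (14, 33)
print(14.0 * nu[0] + 33.0 * nu[1])
```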
In fact, it even works for non-square matrices; the above calculations work the same if the $\nu_i$'s on the left in $M$ are basis vectors for a different space. A bonus is that it also works for a non-degenerate quadratic form; the calculations don't rely on ${\mu_i}^2=0$, nor ${\nu_i}^2=0$, but only on $\nu_i$ being orthogonal to $\nu_k$, and $\mu_j$ being reciprocal to $\nu_k$. So you could instead have $\mu_j$ (the right factors in $M$) be in the same space as $\nu_k$ (the generators of $x$), and $\nu_i$ (the left factors in $M$) in a different space. A downside is that it won't map a non-degenerate space to itself.
I admit that this is worse than the standard matrix algebra; the dot product is not invertible, nor associative. Still, it's good to have this connection between the different algebras. And it's interesting to think of a matrix as a bivector that "rotates" a vector through the dual space and back to a different point in the original space (or a new space).
Speaking of matrix transformations, I should discuss the underlying principle for "contra/co variance": that the basis vectors may vary.
We want to be able to take any (invertible) linear transformation of the null space $V$, and expect that the inverse (contragredient) transformation applies to $V^*$. Arbitrary linear transformations of the ambient $\mathbb R^{n,n}$ will not preserve $V$; the transformed $\nu_i$ may not be null. It suffices to consider transformations that preserve the dot product on $\mathbb R^{n,n}$. One obvious type is the hyperbolic rotation
$$\sigma_1\mapsto\sigma_1\cosh\phi+\tau_1\sinh\phi={\sigma_1}'$$
$$\tau_1\mapsto\sigma_1\sinh\phi+\tau_1\cosh\phi={\tau_1}'$$
$$\sigma_2={\sigma_2}',\quad\sigma_3={\sigma_3}',\quad\cdots$$
$$\tau_2={\tau_2}',\quad\tau_3={\tau_3}',\quad\cdots$$
(or, more compactly, $x\mapsto\exp(-\sigma_1\tau_1\phi/2)x\exp(\sigma_1\tau_1\phi/2)$ ).
The induced transformation of the null vectors is
$${\nu_1}'=\frac{{\sigma_1}'+{\tau_1}'}{\sqrt2}=\exp(\phi)\nu_1$$
$${\mu_1}'=\frac{{\sigma_1}'-{\tau_1}'}{\sqrt2}=\exp(-\phi)\mu_1$$
$${\nu_2}'=\nu_2,\quad{\nu_3}'=\nu_3,\quad\cdots$$
$${\mu_2}'=\mu_2,\quad{\mu_3}'=\mu_3,\quad\cdots$$
The vector $\nu_1$ is multiplied by some positive number $e^\phi$, and the covector $\mu_1$ is divided by the same number. The dot product is still ${\mu_1}'\cdot{\nu_1}'=1$.
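Here's a quick numpy check of that scaling (my own sketch, with the same conventions as before and $n=2$):

```python
import numpy as np

phi = 0.7
eta = np.diag([1.0, 1.0, -1.0, -1.0])            # metric of R^{2,2}: sigma1, sigma2, tau1, tau2
dot = lambda a, b: a @ eta @ b

sigma1, sigma2, tau1, tau2 = np.eye(4)
nu1 = (sigma1 + tau1) / np.sqrt(2)
mu1 = (sigma1 - tau1) / np.sqrt(2)

# Hyperbolic rotation in the sigma1-tau1 plane.
sigma1p = sigma1 * np.cosh(phi) + tau1 * np.sinh(phi)
tau1p   = sigma1 * np.sinh(phi) + tau1 * np.cosh(phi)

nu1p = (sigma1p + tau1p) / np.sqrt(2)
mu1p = (sigma1p - tau1p) / np.sqrt(2)

print(np.allclose(nu1p, np.exp(phi) * nu1))      # expected: True  (nu1 is stretched by e^phi)
print(np.allclose(mu1p, np.exp(-phi) * mu1))     # expected: True  (mu1 is shrunk by e^-phi)
print(np.isclose(dot(mu1p, nu1p), 1.0))          # expected: True  (the pairing is preserved)
```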
You can get a negative multiplier for $\nu_1$ simply by the inversion $\sigma_1\mapsto{^-}\sigma_1,\quad\tau_1\mapsto{^-}\tau_1$; this will also negate $\mu_1$. The result is that you can multiply $\nu_1$ by any non-zero real number, and $\mu_1$ will be divided by the same number.
Of course, this only varies one basis vector in one direction. You could try to rotate the vectors, but a simple rotation in a $\sigma_i\sigma_j$ plane will mix $V$ and $V^*$ together. This problem is solved by an isoclinic rotation in $\sigma_i\sigma_j$ and $\tau_i\tau_j$, which causes the same rotation in $\nu_i\nu_j$ and $\mu_i\mu_j$ (while keeping them separate).
Combine these stretches, reflections, and rotations, and you can generate any invertible linear transformation on $V$, all while maintaining the degeneracy ${\nu_i}^2=0$ and the duality $\mu_i\cdot\nu_j=\delta_{i,j}$. This shows that $V$ and $V^*$ do have the correct "variance".
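And a corresponding check (again my own sketch) that an isoclinic rotation, by the same angle in the $\sigma_1\sigma_2$ and $\tau_1\tau_2$ planes, rotates the $\nu_i$ among themselves and the $\mu_i$ among themselves, without mixing $V$ and $V^*$:

```python
import numpy as np

th = 0.3
c, s = np.cos(th), np.sin(th)
rot2 = np.array([[c, -s],
                 [s,  c]])
# Isoclinic rotation of R^{2,2}: the same rotation applied in the sigma1-sigma2 and tau1-tau2 planes.
R = np.block([[rot2, np.zeros((2, 2))],
              [np.zeros((2, 2)), rot2]])

sigma = np.eye(4)[:2]
tau = np.eye(4)[2:]
nu = (sigma + tau) / np.sqrt(2)
mu = (sigma - tau) / np.sqrt(2)

nu_rot = nu @ R.T                 # the rotated nu_i, one per row
mu_rot = mu @ R.T                 # the rotated mu_i, one per row

# Each rotated nu is a combination of the original nu's (and likewise for the mu's),
# so V and V* are each carried into themselves.
mix = np.array([[c, s],
                [-s, c]])
print(np.allclose(nu_rot, mix @ nu))   # expected: True
print(np.allclose(mu_rot, mix @ mu))   # expected: True
```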
See also Hestenes' Tutorial, page 5 ("Quadratic forms vs contractions").