There is a definition of the geometric product that applies to general multivectors in any Clifford algebra, and it follows directly from the definition of the Clifford algebra itself. To define a Clifford algebra you need a vector space $V$ and a symmetric bilinear form $B(u,v)$ defined for all $u,v\in V$. The Clifford algebra is the quotient of the tensor algebra of $V$ by the two-sided ideal generated by all elements of the form $u\otimes v+v\otimes u -2B(u,v)$ with $u,v\in V$. The geometric product is the product in this quotient algebra. Quotients of algebras by ideals are standard constructions; you can find them in any textbook on abstract algebra. In short, the geometric product is the product in the tensor algebra of $V$ taken modulo the ideal.
AN EXAMPLE:
To illustrate, consider $\mathbb R^2$ and the bilinear form defined by $B(e_1,e_1)=1$, $B(e_2,e_2)=1$, $B(e_1,e_2)=0$, where $e_1=(1,0)$ and $e_2=(0,1)$. The two-sided ideal generated by the elements $u\otimes v+v\otimes u -2B(u,v)$ is infinite-dimensional, just as the tensor algebra itself is. It contains the following elements, among others:
$e_1\otimes e_1-1,\quad e_2\otimes e_2-1,\quad \text{and}\quad e_1\otimes e_2+e_2\otimes e_1$.
This can be used to compute the following products:
$e_1e_1 = e_1\otimes e_1=e_1\otimes e_1 -(e_1\otimes e_1-1)=1$,
$e_2e_2 = e_2\otimes e_2=e_2\otimes e_2 -(e_2\otimes e_2-1)=1$,
$e_1e_2=e_1\otimes e_2= \tfrac{1}{2}(e_1\otimes e_2- e_2\otimes e_1)+\tfrac{1}{2}(e_1\otimes e_2+ e_2\otimes e_1)=\tfrac{1}{2}(e_1\otimes e_2- e_2\otimes e_1)$.
In short, $e_1^2=1$, $e_2^2=1$, and $e_1e_2=-e_2e_1$.
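These relations can be sanity-checked numerically. As a minimal sketch (the matrix choice below is my own, not from the text), represent $e_1$ and $e_2$ by the real $2\times 2$ matrices $\mathrm{diag}(1,-1)$ and the swap matrix; these satisfy exactly the same relations:

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def neg(M):
    """Negate a matrix entrywise."""
    return [[-x for x in row] for row in M]

e1 = [[1, 0], [0, -1]]   # represents e_1
e2 = [[0, 1], [1, 0]]    # represents e_2
one = [[1, 0], [0, 1]]   # identity matrix, represents the scalar 1

assert matmul(e1, e1) == one                   # e_1^2 = 1
assert matmul(e2, e2) == one                   # e_2^2 = 1
assert matmul(e1, e2) == neg(matmul(e2, e1))   # e_1 e_2 = -e_2 e_1
```

Any pair of anticommuting square roots of the identity would do just as well; this is only a check of the relations, not part of the definition.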
Even though the tensor algebra is infinite-dimensional, the quotient algebra is finite-dimensional. To see why, watch what happens if you try to reach grade 3. For instance, consider the product
$e_1(e_1\wedge e_2)$ where $e_1\wedge e_2=\tfrac{1}{2}(e_1\otimes e_2- e_2\otimes e_1)$.
It is again a straightforward application of the tensor product modulo the ideal:
$e_1(e_1\wedge e_2)=\tfrac{1}{2}(e_1\otimes e_1\otimes e_2 - e_1 \otimes e_2\otimes e_1)$
but $e_1\otimes e_1\otimes e_2=(e_1\otimes e_1-1)\otimes e_2 +e_2=e_2$ and
$e_1\otimes e_2\otimes e_1=(e_1\otimes e_2+e_2\otimes e_1)\otimes e_1 -e_2\otimes e_1\otimes e_1=$
$=-e_2\otimes e_1\otimes e_1=-e_2-e_2\otimes(e_1\otimes e_1-1)=-e_2$.
So we get $e_1(e_1\wedge e_2)=\tfrac{1}{2}(e_2-(-e_2))=e_2$; that is, we are back at grade 1. Since the quotient algebra is finite-dimensional, every element can be expressed in terms of a basis, which in the example we are considering consists of $1, e_1, e_2, e_1e_2$. So every multivector $A$ can be expressed as follows:
$A=s+xe_1+ye_2+pe_1e_2$.
If you have two such multivectors you can compute the product simply by using associativity, distributivity, and the properties we have derived above: $e_1^2=1, e_2^2=1, e_1e_2=-e_2e_1$.
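As a concrete sketch of such a computation (the function name and coefficient ordering below are my own), here is the product of two multivectors $A=s+xe_1+ye_2+pe_1e_2$, obtained by expanding with distributivity and the relations $e_1^2=e_2^2=1$, $e_1e_2=-e_2e_1$:

```python
# Multivectors in Cl(2,0) as (s, x, y, p) tuples: s + x*e1 + y*e2 + p*e1e2.

def gp(A, B):
    """Geometric product of two multivectors given as (s, x, y, p) tuples."""
    s1, x1, y1, p1 = A
    s2, x2, y2, p2 = B
    return (s1*s2 + x1*x2 + y1*y2 - p1*p2,    # scalar part
            s1*x2 + x1*s2 - y1*p2 + p1*y2,    # e1 part
            s1*y2 + y1*s2 + x1*p2 - p1*x2,    # e2 part
            s1*p2 + p1*s2 + x1*y2 - y1*x2)    # e1e2 part

e1, e2, e12 = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert gp(e1, e1) == (1, 0, 0, 0)      # e1^2 = 1
assert gp(e1, e2) == (0, 0, 0, 1)      # e1 e2 = e1e2
assert gp(e2, e1) == (0, 0, 0, -1)     # e2 e1 = -e1e2
assert gp(e1, e12) == (0, 0, 1, 0)     # e1 (e1^e2) = e2
```

The minus sign in the scalar component reflects $(e_1e_2)^2=-1$, which follows from the same relations.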
You can easily repeat this exercise for other dimensions and for different bilinear forms.
To come back to the definition of the geometric product, here is how you can understand its significance. In geometry you deal with certain geometric structures. For instance, you might want to find the line passing through two points, or the point at the intersection of two lines; such problems can be handled efficiently with the exterior structure. You might also want to find, say, the line which passes through a given point and is perpendicular to another line; this kind of problem involves the orthogonal structure. The tensor product is too general: by passing to the quotient algebra you effectively eliminate every part of the tensor product that is not related to the exterior or orthogonal structure, and what is left has a clear geometric significance. In a way, the geometric product does a lot of work for you behind the curtains, so that you can concentrate on the relevant geometric structures. The expression $uv=u\cdot v+u\wedge v$ is not really the definition of the product; it is just a property that the geometric product of two vectors happens to have.
Alan Macdonald does not use the definition of the geometric product I described above because he does not presume his readers are familiar with the tensor algebra, ideals, or quotients. Instead, he wants to concentrate on applications, geometric properties of the algebra, and on computation. If you are not satisfied with his approach, perhaps you need to read another book. This one
Clifford Algebras and Lie Theory, by Meinrenken
is recent and it uses the same definition I used. There are other equivalent ways to define Clifford algebras. If you are interested, check out these books as well
Quadratic Mappings and Clifford Algebras, by Helmstetter and Micali,
Clifford Algebras: An introduction, by Garling.
Perhaps, after trying to read these books you will appreciate Alan's book more.
Clifford algebra is a well-established part of standard mathematics. It is used in differential geometry and Clifford analysis, not to mention various applications in physics, and no one questions its validity. People who refer to it as geometric algebra simply want to promote it in engineering, applied mathematics, and physics, where the focus is on applications rather than mathematical rigour. As Alan has pointed out, you don't need to know how the algebra is defined in general in order to use it: you can always compute the product in a basis. Doing that by hand gets tedious as the dimension of the underlying vector space increases, but it can be implemented on a computer quite easily.
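A minimal sketch of such a computer implementation, assuming a diagonal bilinear form (the bitmask representation and function name are my own choices, not from any particular library): a basis blade is encoded as a bitmask whose set bits name its factors, and the product of two blades is the XOR of the masks together with a sign from counting transpositions and from contracting repeated factors.

```python
# A basis blade is a bitmask: bit i set means e_i is a factor, so
# 0b101 stands for e_0 e_2. `sig[i]` is the value of e_i^2 (+1 or -1).

def blade_product(a, b, sig):
    """Geometric product of basis blades `a` and `b`: returns (sign, blade)."""
    sign = 1
    for i in range(len(sig)):
        if b & (1 << i):
            # Moving e_i leftward past the factors of `a` with higher index
            # costs one sign flip per factor passed.
            if bin(a >> (i + 1)).count("1") % 2:
                sign = -sign
            if a & (1 << i):       # e_i e_i contracts to sig[i]
                sign *= sig[i]
    return sign, a ^ b

# Euclidean plane, sig = [1, 1]: recover the relations derived above.
assert blade_product(0b01, 0b01, [1, 1]) == (1, 0b00)    # e_0^2 = 1
assert blade_product(0b10, 0b01, [1, 1]) == (-1, 0b11)   # e_1 e_0 = -e_0 e_1
assert blade_product(0b11, 0b11, [1, 1]) == (-1, 0b00)   # (e_0 e_1)^2 = -1
```

A general multivector is then a dictionary from blade masks to coefficients, and the full product is a double loop over the two dictionaries, accumulating `sign * coeff_a * coeff_b` into the XOR'd blade.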
Can linear duals (i.e. linear functionals) be represented using the geometric algebra formalism?
Yes and no.
In geometric algebra, dual vectors can be computed through duality with the pseudoscalar. Let $\{u_1, u_2, \ldots, u_n\}$ be an orthogonal basis for an $n$-dimensional vector space, and let $I = u_1u_2\cdots u_n$ be their geometric product, which is grade-$n$ due to orthogonality. Then $u^i = (-1)^{i-1}\,(u_1\wedge\cdots\wedge u_{i-1}\wedge u_{i+1}\wedge\cdots\wedge u_n)\,I^{-1}$ is the unique vector such that $u^i \cdot u_j = 0$ for $i\neq j$ and $u^i \cdot u_i = 1$; for an orthogonal basis this reduces to $u^i = u_i/(u_i\cdot u_i)$. These reciprocal vectors are exactly the vectors that correspond, under the metric, to the elements of the dual basis.
So, geometric algebra lets you compute those vectors, but linear functionals themselves--as functions--have no place in the algebra. The algebra has elements and functions of elements, but I would hesitate to say that linear functionals are elements of the algebra.
That said, you can also construct a geometric algebra over the dual space.
Which types of tensors admit a representation using geometric algebra?
Any type for which you can exhibit an isomorphism between tensors of that form and elements of the algebra itself.
...yes, I know that borders on a non-answer, but let me give an example.
For instance, the linear map $T(a) = B \cdot a$ for vector $a$ and bivector $B$ is a tensor, moreover a linear operator. It's clear that this tensor directly, and uniquely, corresponds to $B$. $B$ entirely determines the action of the tensor.
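To make this concrete in the two-dimensional algebra from the example above (the function name is mine, and I use the convention $B\cdot a = \tfrac{1}{2}(Ba - aB)$): with $B=\beta\,e_1e_2$ and $a=xe_1+ye_2$, expanding with $e_1e_2\,e_1=-e_2$ and $e_1e_2\,e_2=e_1$ gives $B\cdot a=\beta(y\,e_1 - x\,e_2)$, so the single coefficient $\beta$ determines the whole map:

```python
def apply_bivector(beta, a):
    """Return the (e1, e2) coefficients of B . a for B = beta*e1e2
    and a = (x, y) meaning x*e1 + y*e2, in Cl(2,0)."""
    x, y = a
    return (beta * y, -beta * x)

# The map is linear: it rotates a by -90 degrees and scales it by beta.
assert apply_bivector(1.0, (1.0, 0.0)) == (0.0, -1.0)   # e1 -> -e2
assert apply_bivector(1.0, (0.0, 1.0)) == (1.0, 0.0)    # e2 -> e1
```

This is the sense in which the tensor "is" the bivector: no extra data beyond $B$ is needed to evaluate it.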
Contrast this against the form of a general linear operator, $T(a) = \sum_{i=1}^n (a \cdot u^i)\, v_i$ for a basis $\{u_i\}$ with reciprocal basis $\{u^i\}$ and some other set of vectors $\{v_i\}$, and you see that there is no such direct correspondence in the general case.
However, do objects "sufficiently isomorphic" to differential forms admit a representation in geometric algebra?
That's an easy one. You can write a $k$-form as a $k$-covector field. Any differential form can be written in terms of the algebra, with the possible exception of "vector-valued forms" and other such things, but these are no more complicated in geometric calculus than they are in traditional differential forms. Doran and Lasenby or Hestenes and Sobczyk both have extensive chapters on calculus with GA.
Can vector fields=derivations be represented using geometric algebra?
No, with a caveat: the geometric algebra is merely an algebra. It does not care what the underlying vector space is that it is built upon. It does not care whether vector fields are actually derivations.
So, GA can't represent vector fields being derivations because such a consideration is wholly separate from it.
In other words, if you want to take the wedge product of two vector fields and interpret that as meaning something in terms of derivations, that's on you. All GA says is that, if there is a meaningful metric you can impose on the vectors in this vector space, you can build a geometric algebra on it.
Do tangent/cotangent spaces/bundles admit a representation using geometric algebra?
The geometric algebra and its calculus can represent vector fields, but I'm not aware of any construction that allows it to invert things and recover the tangent bundle.
However, if I had to guess, I would say such a thing is probably the inverse of the unit pseudoscalar function on a manifold. Such a function is from $M$ to a grade-$n$ multivector, where $n$ is the dimension of $M$. Inverting this map would yield a map from a pseudoscalar to the manifold, which seems almost exactly like the tangent bundle. Such a function, however, would rely on the pseudoscalar admitting an inverse, which it might not do globally, and I can only imagine this making sense in terms of an embedding.
So where do we stand?
In my opinion, geometric algebra and its calculus are more than capable of serving as a full foundation for someone studying differential geometry. Even if you throw away the notion of Hestenes' vector manifolds, you can still use geometric algebra and calculus to compute relations between vector fields or between differential forms. You can translate any differential-forms expression into geometric algebra, and general tensors that don't correspond to GA elements can still be represented as linear functions on those elements instead.
There's already been considerable work on the relationship between GA/GC and differential geometry. I recommend Doran and Lasenby for this; they have an in-depth chapter building on Hestenes' vector manifold theory, in which they develop and expand on much of the calculus of GA. But moreover, they also have a chapter on general relativity, in which they develop an alternative to curved spaces for differential geometry, preferring "gauge fields" on flat manifolds instead. This method is superficially very similar to moving frames, and they use it to generate GA equivalents of the Cartan structure equations.
Best Answer
Take the inner product with $\boldsymbol b$ on both sides and solve for $\boldsymbol x \cdot \boldsymbol b$. Then replace $\boldsymbol x \cdot \boldsymbol b$ in the original equation by the value just found, and solve for $\boldsymbol x$.
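The equation being solved is not quoted above, so purely as an illustration of these steps, suppose it has the form $\boldsymbol x + (\boldsymbol x\cdot\boldsymbol b)\,\boldsymbol a = \boldsymbol c$ with $\boldsymbol a, \boldsymbol b, \boldsymbol c$ known vectors (this specific form is my assumption, not taken from the question). Taking the inner product with $\boldsymbol b$ gives
$\boldsymbol x\cdot\boldsymbol b + (\boldsymbol x\cdot\boldsymbol b)(\boldsymbol a\cdot\boldsymbol b) = \boldsymbol c\cdot\boldsymbol b$, so $\boldsymbol x\cdot\boldsymbol b = \dfrac{\boldsymbol c\cdot\boldsymbol b}{1+\boldsymbol a\cdot\boldsymbol b}$ provided $1+\boldsymbol a\cdot\boldsymbol b\neq 0$.
Substituting back into the original equation and solving for $\boldsymbol x$ yields $\boldsymbol x = \boldsymbol c - \dfrac{\boldsymbol c\cdot\boldsymbol b}{1+\boldsymbol a\cdot\boldsymbol b}\,\boldsymbol a$.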