There is a definition of the geometric product that applies to general multivectors in any Clifford algebra, and it follows directly from the construction of the algebra. To define a Clifford algebra you need a vector space $V$ and a symmetric bilinear form $B(u,v)$ defined for all $u,v\in V$. The Clifford algebra is the quotient of the tensor algebra of $V$ by the two-sided ideal generated by all elements of the form $u\otimes v+v\otimes u -2B(u,v)$ with $u,v\in V$. The geometric product is the product in this quotient algebra. Quotients of algebras by ideals are standard, and you can find them in any textbook on abstract algebra. Basically, the geometric product is the product in the tensor algebra of $V$ taken modulo the ideal.
AN EXAMPLE:
To illustrate, consider $\mathbb R^2$ and the bilinear form defined by $B(e_1,e_1)=1$, $B(e_2,e_2)=1$, $B(e_1,e_2)=0$, where $e_1=(1,0)$ and $e_2=(0,1)$. The two-sided ideal generated by the elements $u\otimes v+v\otimes u -2B(u,v)$ is infinite-dimensional, as is the tensor algebra itself. It contains the following elements, among others:
$e_1\otimes e_1-1,\quad e_2\otimes e_2-1,\quad \text{and}\quad e_1\otimes e_2+e_2\otimes e_1$.
This can be used to compute the following products:
$e_1e_1 = e_1\otimes e_1=e_1\otimes e_1 -(e_1\otimes e_1-1)=1$,
$e_2e_2 = e_2\otimes e_2=e_2\otimes e_2 -(e_2\otimes e_2-1)=1$,
$e_1e_2=e_1\otimes e_2= \tfrac{1}{2}(e_1\otimes e_2- e_2\otimes e_1)+\tfrac{1}{2}(e_1\otimes e_2+ e_2\otimes e_1)=\tfrac{1}{2}(e_1\otimes e_2- e_2\otimes e_1)$,
where the symmetric part drops out because $e_1\otimes e_2+e_2\otimes e_1$ lies in the ideal (here $B(e_1,e_2)=0$).
In short, $e_1^2=1$, $e_2^2=1$, and $e_1e_2=-e_2e_1$.
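These reductions are easy to mechanize. Below is a minimal sketch (my own illustration, not from the text; the name `reduce_word` and the index-tuple encoding are assumptions) that brings a pure tensor word $e_{i_1}\otimes e_{i_2}\otimes\cdots$ into normal form modulo the ideal, for the Euclidean form above:

```python
# Reduce a tensor word modulo the ideal for the Euclidean form on R^2,
# using the two rewriting rules the ideal provides:
#   e_i (x) e_i  ->  1            (e_i (x) e_i - 1 is in the ideal)
#   e_j (x) e_i  -> -e_i (x) e_j  for i < j  (e_i (x) e_j + e_j (x) e_i is in the ideal)

def reduce_word(word):
    """Return (sign, normal_form), where normal_form is a strictly
    increasing tuple of generator indices."""
    sign, w = 1, list(word)
    changed = True
    while changed:
        changed = False
        for k in range(len(w) - 1):
            if w[k] == w[k + 1]:            # e_i (x) e_i -> 1
                del w[k:k + 2]
                changed = True
                break
            if w[k] > w[k + 1]:             # e_j (x) e_i -> -e_i (x) e_j
                w[k], w[k + 1] = w[k + 1], w[k]
                sign = -sign
                changed = True
                break
    return sign, tuple(w)

print(reduce_word((1, 1)))     # (1, ()):       e_1 e_1 = 1
print(reduce_word((2, 1)))     # (-1, (1, 2)):  e_2 e_1 = -e_1 e_2
print(reduce_word((1, 1, 2)))  # (1, (2,)):     e_1 (x) e_1 (x) e_2 collapses to e_2
```

The last line is the word that appears in the grade-3 computation further down: $e_1\otimes e_1\otimes e_2$ reduces to $e_2$.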
Even though the tensor algebra is infinite-dimensional, the quotient algebra is finite-dimensional. See what happens if you try to get to grade 3. For instance, consider this product
$e_1(e_1\wedge e_2)$ where $e_1\wedge e_2=\tfrac{1}{2}(e_1\otimes e_2- e_2\otimes e_1)$.
It is again a straightforward application of the tensor product modulo the ideal:
$e_1(e_1\wedge e_2)=\tfrac{1}{2}(e_1\otimes e_1\otimes e_2 - e_1 \otimes e_2\otimes e_1)$
but $e_1\otimes e_1\otimes e_2=(e_1\otimes e_1-1)\otimes e_2 +e_2=e_2$ and
$e_1\otimes e_2\otimes e_1=(e_1\otimes e_2+e_2\otimes e_1)\otimes e_1 -e_2\otimes e_1\otimes e_1=$
$=-e_2\otimes e_1\otimes e_1=-e_2-e_2\otimes(e_1\otimes e_1-1)=-e_2$.
So, we get $e_1(e_1\wedge e_2)=\tfrac{1}{2}(e_2-(-e_2))=e_2$, that is, we are back to grade 1. Since the quotient algebra is finite-dimensional, every element can be expressed in terms of the basis, which consists of $1, e_1, e_2, e_1e_2$ in the example we are considering. So, every multivector $A$ can be expressed as follows:
$A=s+xe_1+ye_2+pe_1e_2$.
If you have two such multivectors you can compute the product simply by using associativity, distributivity, and the properties we have derived above: $e_1^2=1, e_2^2=1, e_1e_2=-e_2e_1$.
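As a concrete sketch (my own; the name `gp_cl2` and the tuple encoding are hypothetical), here is that computation for this algebra, storing $A=s+xe_1+ye_2+pe_1e_2$ as the tuple $(s,x,y,p)$ and expanding the product using only the relations just derived:

```python
# Geometric product in Cl(2,0); only e1^2 = 1, e2^2 = 1, and
# e1 e2 = -e2 e1 go into these formulas.

def gp_cl2(a, b):
    """Product of multivectors (s, x, y, p) over the basis 1, e1, e2, e1e2."""
    s1, x1, y1, p1 = a
    s2, x2, y2, p2 = b
    return (s1*s2 + x1*x2 + y1*y2 - p1*p2,   # scalar: (e1 e2)^2 = -1
            s1*x2 + x1*s2 - y1*p2 + p1*y2,   # e1
            s1*y2 + y1*s2 + x1*p2 - p1*x2,   # e2
            s1*p2 + p1*s2 + x1*y2 - y1*x2)   # e1e2

e1, e2 = (0, 1, 0, 0), (0, 0, 1, 0)
print(gp_cl2(e1, e2))              # (0, 0, 0, 1):  e1 e2
print(gp_cl2(e2, e1))              # (0, 0, 0, -1): e2 e1 = -e1 e2
print(gp_cl2(e1, gp_cl2(e1, e2)))  # (0, 0, 1, 0):  e1 (e1 e2) = e2, as computed above
```

For two vectors $u$ and $v$ the scalar part comes out as $u\cdot v$ and the $e_1e_2$ part as $u\wedge v$, recovering the familiar $uv=u\cdot v+u\wedge v$.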
You can easily repeat this exercise for other dimensions and for different bilinear forms.
To come back to the definition of the geometric product, here is how you can understand its significance. In geometry, you are dealing with certain geometric structures. For instance, you might want to find a line passing through two points, or you might want to find a point at the intersection of two lines. These kinds of problems can be dealt with efficiently by applying the exterior structure. You also might want to find, say, a line which passes through a given point and is perpendicular to another line. This kind of problem is related to the orthogonal structure. The tensor product is too general. By using the quotient algebra you are effectively eliminating any part of the tensor product which is not related to exterior or orthogonal structure. What is left has a clear geometric significance. In a way, the geometric product does a lot of work for you behind the curtains, so that you can concentrate on the relevant geometric structures. The expression $uv=u\cdot v+u\wedge v$ is not really the definition of the product. It is just a property that the geometric product of two vectors has.
Alan Macdonald does not use the definition of the geometric product I described above because he does not presume his readers are familiar with the tensor algebra, ideals, or quotients. Instead, he wants to concentrate on applications, geometric properties of the algebra, and on computation. If you are not satisfied with his approach, perhaps you need to read another book. This one
Clifford Algebras and Lie Theory, by Meinrenken
is recent and it uses the same definition I used. There are other equivalent ways to define Clifford algebras. If you are interested, check out these books as well
Quadratic Mappings and Clifford Algebras, by Helmstetter and Micali,
Clifford Algebras: An introduction, by Garling.
Perhaps, after trying to read these books you will appreciate Alan's book more.
Clifford algebra is a well-established part of standard mathematics. It is used in differential geometry and Clifford Analysis, not to mention various applications in physics. No one questions its validity. People who refer to it as Geometric algebra simply want to help promote it in engineering, applied mathematics, and physics. The focus is on applications rather than mathematical rigour. As Alan has pointed out, you don't need to know how the algebra is defined in general in order to use it. You can always compute the product in the basis. It gets tedious to do it by hand as the dimension of the underlying vector space increases, but it can be implemented on a computer quite easily.
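To back up that last claim, here is one common way to implement the product for any dimension and any diagonal bilinear form: encode a basis blade as a bitmask (bit $i$ set means the blade contains $e_i$), count the transpositions needed to bring the generators into order, and contract repeated generators with $B(e_i,e_i)$. This is a sketch under those assumptions; the names `blade_product` and `gp_general` are mine:

```python
def blade_product(a, b, sig):
    """Product of two basis blades given as bitmasks, for a diagonal
    form with sig[i] = B(e_i, e_i); returns (scalar_factor, result_mask)."""
    s, t = 0, a >> 1
    while t:                        # count transpositions needed to
        s += bin(t & b).count("1")  # reorder the generators of b past a
        t >>= 1
    coeff = -1 if s % 2 else 1
    for i in range(len(sig)):       # repeated generators contract:
        if (a & b) >> i & 1:        # e_i e_i = B(e_i, e_i)
            coeff *= sig[i]
    return coeff, a ^ b

def gp_general(A, B, sig):
    """Product of multivectors stored as {mask: coefficient} dicts."""
    C = {}
    for ma, ca in A.items():
        for mb, cb in B.items():
            f, m = blade_product(ma, mb, sig)
            C[m] = C.get(m, 0) + f * ca * cb
    return {m: c for m, c in C.items() if c != 0}

sig = (1, 1)                                  # the Euclidean plane again
e1, e2 = {0b01: 1}, {0b10: 1}
print(gp_general(e1, e1, sig))                # {0: 1}   e1^2 = 1
print(gp_general(e2, e1, sig))                # {3: -1}  e2 e1 = -e1 e2
print(gp_general({0b11: 1}, {0b11: 1}, sig))  # {0: -1}  (e1 e2)^2 = -1
```

Changing `sig` changes the signature: with `sig = (-1,)` the single generator squares to $-1$ and the algebra behaves like the complex numbers.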
Best Answer
Let me address this more on the side of how linear algebra is presented in some GA material.
In traditional linear algebra, you use a lot of matrices and column/row vectors because this gives you an easy way to compute the action of a linear map or operator on a vector. What I want to emphasize is that this is a representation. It's a way of talking about linear maps, but it's not the only way.
In GA, there are reasons we don't often use matrices explicitly. One reason is that we have a natural extension of a linear operator to all kinds of blades, not just vectors. If you have a linear operator $\underline T$, and you want to compute its action on a bivector $a \wedge b$ with matrices, you would have to compute a totally different matrix from the one you would use just considering $\underline T$ acting on a vector (this matrix's components would describe its action on basis bivectors, not basis vectors). This is one reason why matrices become rather useless.
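To make that concrete, here is a small sketch (my own illustration; the names are hypothetical). In $\mathbb R^3$, extend a diagonal map $\underline T(e_i)=d_ie_i$ to bivectors by $\underline T(a\wedge b)=\underline T(a)\wedge\underline T(b)$; on the bivector basis it acts as $\mathrm{diag}(d_2d_3, d_3d_1, d_1d_2)$, a different matrix from that of $\underline T$ itself:

```python
def wedge3(a, b):
    """Components of a^b on the bivector basis e2e3, e3e1, e1e2."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

d = (2, 3, 5)                                    # a diagonal map T(e_i) = d_i e_i
T = lambda v: (d[0]*v[0], d[1]*v[1], d[2]*v[2])

a, b = (1, 0, 2), (0, 1, 1)
print(wedge3(T(a), T(b)))                        # (-30, -10, 6)
# On bivectors T acts as diag(d2*d3, d3*d1, d1*d2) = diag(15, 10, 6),
# not as diag(2, 3, 5); the same components result:
print(tuple(m * w for m, w in zip((15, 10, 6), wedge3(a, b))))   # (-30, -10, 6)
```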
Thus, since we tend to look at linear maps and operators merely as linear functions, we have to develop ways to talk about common linear algebra concepts without reference to matrices at all. This is how we arrive at a basis-independent definition of the determinant using the pseudoscalar $I$, saying $\underline T(I) = I \det \underline T$ for instance. Texts on GA and GC also develop ways to talk about traces and other interesting linear algebra concepts without reference to matrices.
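The determinant identity is easy to check in the plane: apply $\underline T$ to $e_1$ and $e_2$, wedge the images, and read off the coefficient of $I=e_1\wedge e_2$. A minimal sketch (my own, assuming the map is given as a $2\times 2$ row-major matrix):

```python
# Basis-independent determinant T(I) = (det T) I in 2D, with I = e1^e2.

def det_via_pseudoscalar(T):
    Te1 = (T[0][0], T[1][0])                  # image of e1 (first column)
    Te2 = (T[0][1], T[1][1])                  # image of e2 (second column)
    return Te1[0]*Te2[1] - Te1[1]*Te2[0]      # coefficient of e1^e2 in T(e1)^T(e2)

print(det_via_pseudoscalar(((2, 1), (0, 3))))  # 6, matching ad - bc = 2*3 - 1*0
```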
With all that in mind, since we don't talk about matrices when doing linear algebra in GA, we don't have to think about geometric products of matrices. We just talk about compositions of maps (which would be represented through matrix multiplication) when applying several maps in succession.