I just want to point out that GA can be used to make covariant multivectors (or differential forms) on $\mathbb R^n$ without forcing a metric onto it. In other words, the distinction between vectors and covectors (or between $\mathbb R^n$ and its dual) can be maintained.
This is done with a pseudo-Euclidean space $\mathbb R^{n,n}$.
Take an orthonormal set of spacelike vectors $\{\sigma_i\}$ (which square to ${^+}1$) and timelike vectors $\{\tau_i\}$ (which square to ${^-}1$). Define null vectors
$$\Big\{\nu_i=\frac{\sigma_i+\tau_i}{\sqrt2}\Big\}$$
$$\Big\{\mu_i=\frac{\sigma_i-\tau_i}{\sqrt2}\Big\};$$
they're null because
$${\nu_i}^2=\frac{{\sigma_i}^2+2\sigma_i\cdot\tau_i+{\tau_i}^2}{2}=\frac{(1)+2(0)+({^-}1)}{2}=0$$
$${\mu_i}^2=\frac{{\sigma_i}^2-2\sigma_i\cdot\tau_i+{\tau_i}^2}{2}=\frac{(1)-2(0)+({^-}1)}{2}=0.$$
More generally,
$$\nu_i\cdot\nu_j=\frac{\sigma_i\cdot\sigma_j+\sigma_i\cdot\tau_j+\tau_i\cdot\sigma_j+\tau_i\cdot\tau_j}{2}=\frac{(\delta_{i,j})+0+0+({^-}\delta_{i,j})}{2}=0$$
and
$$\mu_i\cdot\mu_j=0.$$
So the spaces spanned by $\{\nu_i\}$ or $\{\mu_i\}$ each have degenerate quadratic forms. But the dot product between them is non-degenerate:
$$\nu_i\cdot\mu_i=\frac{\sigma_i\cdot\sigma_i-\sigma_i\cdot\tau_i+\tau_i\cdot\sigma_i-\tau_i\cdot\tau_i}{2}=\frac{(1)-0+0-({^-}1)}{2}=1$$
$$\nu_i\cdot\mu_j=\frac{\sigma_i\cdot\sigma_j-\sigma_i\cdot\tau_j+\tau_i\cdot\sigma_j-\tau_i\cdot\tau_j}{2}=\frac{(\delta_{i,j})-0+0-({^-}\delta_{i,j})}{2}=\delta_{i,j}$$
Of course, we could have just started with the definition that $\mu_i\cdot\nu_j=\delta_{i,j}=\nu_i\cdot\mu_j$, and $\nu_i\cdot\nu_j=0=\mu_i\cdot\mu_j$, instead of going through "spacetime".
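These relations are easy to check numerically; here is a minimal NumPy sketch (the choice $n=3$ and all variable names are illustrative), with the $\mathbb R^{n,n}$ dot product given by the diagonal metric $\mathrm{diag}(1,\dots,1,{-1},\dots,{-1})$:

```python
import numpy as np

n = 3
# Metric of R^{n,n}: sigma_i squares to +1, tau_i squares to -1
G = np.diag([1.0] * n + [-1.0] * n)

# Coordinates of the orthonormal basis: sigma_i = e_i, tau_i = e_{n+i}
sigma = np.eye(2 * n)[:n]
tau = np.eye(2 * n)[n:]

# Null bases: nu_i = (sigma_i + tau_i)/sqrt(2), mu_i = (sigma_i - tau_i)/sqrt(2)
nu = (sigma + tau) / np.sqrt(2)
mu = (sigma - tau) / np.sqrt(2)

dot = lambda u, v: u @ G @ v

# Gram matrices: nu.nu = 0, mu.mu = 0, mu.nu = identity
N = np.array([[dot(nu[i], nu[j]) for j in range(n)] for i in range(n)])
M = np.array([[dot(mu[i], mu[j]) for j in range(n)] for i in range(n)])
D = np.array([[dot(mu[i], nu[j]) for j in range(n)] for i in range(n)])

print(np.allclose(N, 0), np.allclose(M, 0), np.allclose(D, np.eye(n)))
```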
The space $V$ will be generated by $\{\nu_i\}$, and its dual $V^*$ by $\{\mu_i=\nu^i\}$. Elements of $V^*$ are differential 1-forms; taking the dot product of something in $V^*$ with something in $V$ evaluates the covector on the vector, producing a scalar. You can make contravariant multivectors from wedge products of things in $V$, and covariant multivectors from wedge products of things in $V^*$.
You can also take the wedge product of something in $V^*$ with something in $V$.
$$\mu_i\wedge\nu_i=\frac{\sigma_i\wedge\sigma_i+\sigma_i\wedge\tau_i-\tau_i\wedge\sigma_i-\tau_i\wedge\tau_i}{2}=\frac{0+\sigma_i\wedge\tau_i+\sigma_i\wedge\tau_i-0}{2}=\sigma_i\wedge\tau_i$$
$$\mu_i\wedge\nu_j=\frac{\sigma_i\wedge\sigma_j+\sigma_i\wedge\tau_j-\tau_i\wedge\sigma_j-\tau_i\wedge\tau_j}{2},\quad i\neq j$$
What does this mean? ...I suppose it could be a matrix (a mixed variance tensor)!
A matrix can be defined as a bivector:
$$M = \sum_{i,j} M^i\!_j\;\nu_i\wedge\mu_j = \sum_{i,j} M^i\!_j\;\nu_i\wedge\nu^j$$
where each $M^i_j$ is a scalar. Note that $(\nu_i\wedge\mu_j)\neq{^-}(\nu_j\wedge\mu_i)$, so $M$ is not necessarily antisymmetric. The corresponding linear function $f:V\to V$ is (with $\cdot$ the "fat dot product")
$$f(x) = M\cdot x = \frac{Mx-xM}{2}$$
$$= \sum_{i,j} M^i_j(\nu_i\wedge\mu_j)\cdot\sum_k x^k\nu_k$$
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i\mu_j-\mu_j\nu_i}{2}\cdot\nu_k$$
$$= \sum_{i,j,k} M^i_jx^k\frac{(\nu_i\mu_j)\nu_k-\nu_k(\nu_i\mu_j)-(\mu_j\nu_i)\nu_k+\nu_k(\mu_j\nu_i)}{4}$$
(the $\nu$'s anticommute because their dot product is zero:)
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i\mu_j\nu_k+\nu_i\nu_k\mu_j+\mu_j\nu_k\nu_i+\nu_k\mu_j\nu_i}{4}$$
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i(\mu_j\nu_k+\nu_k\mu_j)+(\mu_j\nu_k+\nu_k\mu_j)\nu_i}{4}$$
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i(\mu_j\cdot\nu_k)+(\mu_j\cdot\nu_k)\nu_i}{2}$$
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i(\delta_{j,k})+(\delta_{j,k})\nu_i}{2}$$
$$= \sum_{i,j,k} M^i_jx^k\big(\delta_{j,k}\nu_i\big)$$
$$= \sum_{i,j} M^i_jx^j\nu_i$$
This agrees with the conventional definition of matrix multiplication.
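To double-check the result, here is a sketch of the whole computation in a tiny hand-rolled Clifford algebra of $\mathbb R^{2,2}$ (basis blades encoded as bitmasks, multivectors as {blade: coefficient} dicts; the sample matrix and vector are arbitrary):

```python
from itertools import product

METRIC = [1, 1, -1, -1]            # R^{2,2}: sigma_1, sigma_2, tau_1, tau_2

def blade_gp(a, b):
    """Geometric product of basis blades (bitmasks): returns (sign, blade)."""
    s, t = 0, a >> 1
    while t:                        # count transpositions needed to sort the product
        s += bin(t & b).count("1")
        t >>= 1
    sign = -1 if s % 2 else 1
    for i, g in enumerate(METRIC):  # contract repeated generators via the metric
        if a & b & (1 << i):
            sign *= g
    return sign, a ^ b

def gp(A, B):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    out = {}
    for (ba, ca), (bb, cb) in product(A.items(), B.items()):
        s, blade = blade_gp(ba, bb)
        out[blade] = out.get(blade, 0.0) + s * ca * cb
    return out

def add(A, B):
    out = dict(A)
    for k, v in B.items():
        out[k] = out.get(k, 0.0) + v
    return out

def smul(c, A):
    return {k: c * v for k, v in A.items()}

def comm(A, B):
    """The 'fat dot' / commutator product (AB - BA)/2."""
    return smul(0.5, add(gp(A, B), smul(-1.0, gp(B, A))))

def pair(A, B):
    """Symmetric dot of two vectors: scalar part of (AB + BA)/2."""
    return 0.5 * (gp(A, B).get(0, 0.0) + gp(B, A).get(0, 0.0))

r2 = 2 ** 0.5
sigma = [{1 << 0: 1.0}, {1 << 1: 1.0}]
tau   = [{1 << 2: 1.0}, {1 << 3: 1.0}]
nu = [smul(1 / r2, add(sigma[i], tau[i])) for i in range(2)]
mu = [smul(1 / r2, add(sigma[i], smul(-1.0, tau[i]))) for i in range(2)]

# M = sum_ij M^i_j nu_i ^ mu_j  (for two vectors, the wedge is the commutator)
Mmat = [[2.0, 3.0], [5.0, 7.0]]
M = {}
for i in range(2):
    for j in range(2):
        M = add(M, smul(Mmat[i][j], comm(nu[i], mu[j])))

x = [1.0, -2.0]
X = add(smul(x[0], nu[0]), smul(x[1], nu[1]))
fx = comm(M, X)                     # f(x) = M . x

# Read off components via the reciprocal basis: y^i = mu_i . f(x)
y = [pair(mu[i], fx) for i in range(2)]
print(y)                            # matches the matrix product M x = [-4, -9], up to rounding
```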
In fact, it even works for non-square matrices; the above calculations work the same if the $\nu_i$'s on the left in $M$ are basis vectors for a different space. A bonus is that it also works for a non-degenerate quadratic form; the calculations don't rely on ${\mu_i}^2=0$, nor ${\nu_i}^2=0$, but only on $\nu_i$ being orthogonal to $\nu_k$, and $\mu_j$ being reciprocal to $\nu_k$. So you could instead have $\mu_j$ (the right factors in $M$) be in the same space as $\nu_k$ (the generators of $x$), and $\nu_i$ (the left factors in $M$) in a different space. A downside is that it won't map a non-degenerate space to itself.
I admit that this is worse than the standard matrix algebra; the dot product is neither invertible nor associative. Still, it's good to have this connection between the different algebras. And it's interesting to think of a matrix as a bivector that "rotates" a vector through the dual space and back to a different point in the original space (or a new space).
Speaking of matrix transformations, I should discuss the underlying principle for "contra/co variance": that the basis vectors may vary.
We want to be able to take any (invertible) linear transformation of the null space $V$, and expect that the opposite transformation applies to $V^*$. Arbitrary linear transformations of the external $\mathbb R^{n,n}$ will not preserve $V$; the transformed $\nu_i$ may not be null. It suffices to consider transformations that preserve the dot product on $\mathbb R^{n,n}$. One obvious type is the hyperbolic rotation
$$\sigma_1\mapsto\sigma_1\cosh\phi+\tau_1\sinh\phi={\sigma_1}'$$
$$\tau_1\mapsto\sigma_1\sinh\phi+\tau_1\cosh\phi={\tau_1}'$$
$$\sigma_2={\sigma_2}',\quad\sigma_3={\sigma_3}',\quad\cdots$$
$$\tau_2={\tau_2}',\quad\tau_3={\tau_3}',\quad\cdots$$
(or, more compactly, $x\mapsto\exp(-\sigma_1\tau_1\phi/2)x\exp(\sigma_1\tau_1\phi/2)$ ).
The induced transformation of the null vectors is
$${\nu_1}'=\frac{{\sigma_1}'+{\tau_1}'}{\sqrt2}=\exp(\phi)\nu_1$$
$${\mu_1}'=\frac{{\sigma_1}'-{\tau_1}'}{\sqrt2}=\exp(-\phi)\mu_1$$
$${\nu_2}'=\nu_2,\quad{\nu_3}'=\nu_3,\quad\cdots$$
$${\mu_2}'=\mu_2,\quad{\mu_3}'=\mu_3,\quad\cdots$$
The vector $\nu_1$ is multiplied by some positive number $e^\phi$, and the covector $\mu_1$ is divided by the same number. The dot product is still ${\mu_1}'\cdot{\nu_1}'=1$.
You can get a negative multiplier for $\nu_1$ simply by the inversion $\sigma_1\mapsto{^-}\sigma_1,\quad\tau_1\mapsto{^-}\tau_1$; this will also negate $\mu_1$. The result is that you can multiply $\nu_1$ by any non-zero real number, and $\mu_1$ will be divided by the same number.
Of course, this only varies one basis vector in one direction. You could try to rotate the vectors, but a simple rotation in a $\sigma_i\sigma_j$ plane will mix $V$ and $V^*$ together. This problem is solved by an isoclinic rotation in $\sigma_i\sigma_j$ and $\tau_i\tau_j$, which causes the same rotation in $\nu_i\nu_j$ and $\mu_i\mu_j$ (while keeping them separate).
Combine these stretches, reflections, and rotations, and you can generate any invertible linear transformation on $V$, all while maintaining the degeneracy ${\nu_i}^2=0$ and the duality $\mu_i\cdot\nu_j=\delta_{i,j}$. This shows that $V$ and $V^*$ do have the correct "variance".
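A quick numerical sanity check of both kinds of transformation, working on coordinates in the $(\sigma_1,\sigma_2,\tau_1,\tau_2)$ basis of $\mathbb R^{2,2}$ (the particular rapidity and angle are arbitrary):

```python
import numpy as np

# Coordinates in the orthonormal basis (sigma_1, sigma_2, tau_1, tau_2) of R^{2,2}
G = np.diag([1.0, 1.0, -1.0, -1.0])
r2 = np.sqrt(2.0)
nu = np.array([[1.0, 0, 1, 0], [0, 1, 0, 1]]) / r2    # nu_1, nu_2
mu = np.array([[1.0, 0, -1, 0], [0, 1, 0, -1]]) / r2  # mu_1, mu_2

phi, theta = 0.7, 1.1   # an arbitrary rapidity and rotation angle

# Hyperbolic rotation in the sigma_1 tau_1 plane (stretches nu_1, shrinks mu_1)
c, s = np.cosh(phi), np.sinh(phi)
H = np.array([[c, 0, s, 0],
              [0, 1, 0, 0],
              [s, 0, c, 0],
              [0, 0, 0, 1]])

# Isoclinic rotation: the same angle in the sigma_1 sigma_2 and tau_1 tau_2 planes
c2, s2 = np.cos(theta), np.sin(theta)
R2 = np.array([[c2, -s2], [s2, c2]])
R = np.block([[R2, np.zeros((2, 2))], [np.zeros((2, 2)), R2]])

T = R @ H   # combined transformation; both factors preserve the R^{2,2} dot product
assert np.allclose(T.T @ G @ T, G)

nu_p = nu @ T.T   # transformed nu_i (as rows)
mu_p = mu @ T.T   # transformed mu_i

# V and V* are each mapped into themselves: nu' in span(nu), mu' in span(mu)
assert np.linalg.matrix_rank(np.vstack([nu, nu_p])) == 2
assert np.linalg.matrix_rank(np.vstack([mu, mu_p])) == 2

# ...and the duality mu'_i . nu'_j = delta_ij is maintained
assert np.allclose(mu_p @ G @ nu_p.T, np.eye(2))
print("ok")
```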
See also Hestenes' Tutorial, page 5 ("Quadratic forms vs contractions").
Exterior algebra
Exterior algebra defines an antisymmetric wedge product. An example of the wedge product of two unit vectors, called a two-form, is
$$\mathbf{e}_1 \wedge \mathbf{e}_2 = -\mathbf{e}_2 \wedge \mathbf{e}_1.$$
An example of a wedge product of three (unit) vectors, a three-form, is
$$\begin{aligned}\mathbf{e}_1 \wedge \mathbf{e}_2 \wedge \mathbf{e}_3 &= -\mathbf{e}_2 \wedge \mathbf{e}_1 \wedge \mathbf{e}_3 \\ &= \mathbf{e}_2 \wedge \mathbf{e}_3 \wedge \mathbf{e}_1 \\ &= -\mathbf{e}_3 \wedge \mathbf{e}_2 \wedge \mathbf{e}_1.\end{aligned}$$
A consequence of this antisymmetry is that any wedge product in which one of the wedged vectors is collinear with another is zero.
Exterior algebra also has the concept of duality, which provides a mapping between $k$-forms and $(N-k)$-forms, where $N$ is the dimension of the underlying vector space. For example, in a three-dimensional Euclidean space the dual of the two-form $ \mathbf{e}_1 \wedge \mathbf{e}_2 $, denoted $ *\left( { \mathbf{e}_1 \wedge \mathbf{e}_2} \right) $, is the quantity satisfying
$$*\left( {\mathbf{e}_1 \wedge \mathbf{e}_2} \right) \wedge \left( { \mathbf{e}_1 \wedge \mathbf{e}_2} \right) = \mathbf{e}_1 \wedge \mathbf{e}_2 \wedge \mathbf{e}_3,$$
so
$$*\left( {\mathbf{e}_1 \wedge \mathbf{e}_2} \right) = \mathbf{e}_3.$$
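In components, the dual of a basis two-form in three dimensions is the remaining basis vector, signed by the parity of the index permutation; a small sketch (the function name is just illustrative):

```python
def hodge_dual_2form(i, j):
    """Dual of e_i ^ e_j in Euclidean R^3: returns (sign, k) with *(e_i ^ e_j) = sign * e_k."""
    k = ({1, 2, 3} - {i, j}).pop()      # the index not appearing in the two-form
    perm = (i, j, k)
    # sign = parity of the permutation taking (1, 2, 3) to (i, j, k)
    inversions = sum(perm[a] > perm[b] for a in range(3) for b in range(a + 1, 3))
    return (-1) ** inversions, k

print(hodge_dual_2form(1, 2))   # (1, 3):  *(e1 ^ e2) =  e3
print(hodge_dual_2form(2, 3))   # (1, 1):  *(e2 ^ e3) =  e1
print(hodge_dual_2form(1, 3))   # (-1, 2): *(e1 ^ e3) = -e2
```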
I believe that Grassmann algebras have the same structure as exterior algebras, but also define a regressive product related to the exterior algebra dual.
Geometric algebra
In an exterior algebra, one can add $k$-forms to other $k$-forms, but not forms of different grade. This restriction is relaxed in geometric algebra (GA), where a quantity such as
$$1 + 2 \mathbf{e}_1 + 3 \mathbf{e}_2 \wedge \mathbf{e}_4 + 5 \mathbf{e}_1 \wedge \mathbf{e}_2 \wedge \mathbf{e}_4,$$
is perfectly well formed. The geometric algebra is built up of products of vectors, where the vector product is defined as an associative product
$$\mathbf{a} (\mathbf{b} \mathbf{c}) = (\mathbf{a} \mathbf{b}) \mathbf{c} = \mathbf{a} \mathbf{b} \mathbf{c},$$
and where the product of a vector with itself is defined as the squared length of that vector
$$\mathbf{a} \mathbf{a} = \mathbf{a} \cdot \mathbf{a} = \left\lvert {\mathbf{a}} \right\rvert^2.$$
In a Euclidean space such a length is always positive, but mixed-signature metrics (such as that of the Minkowski space used in special relativity) are also allowed.
The product of two non-colinear vectors can be factored as
$$\mathbf{a} \mathbf{b} = \frac{1}{{2}} \left( { \mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a} } \right) + \frac{1}{{2}} \left( { \mathbf{a} \mathbf{b} - \mathbf{b} \mathbf{a} } \right).$$
The first (symmetric) term can be identified with the dot product, whereas the second, completely antisymmetric, term can be identified with the wedge product, so this complete vector product is denoted
$$\mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b}.$$
This is one of the simplest examples of what is called a multivector in GA, containing the sum of a scalar (grade zero) and a bivector (grade two).
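This decomposition can be checked concretely in a matrix representation of the Euclidean plane's GA; one common choice sends $\mathbf{e}_1$ and $\mathbf{e}_2$ to symmetric, anticommuting $2\times 2$ matrices that square to the identity (a sketch, with arbitrary sample coordinates):

```python
import numpy as np

# A faithful 2x2 real representation of the Euclidean plane's GA
e1 = np.array([[1.0, 0.0], [0.0, -1.0]])   # e1^2 = identity
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])    # e2^2 = identity, e1 e2 = -e2 e1
I2 = np.eye(2)

a_c, b_c = np.array([2.0, 1.0]), np.array([-1.0, 3.0])   # arbitrary coordinates
A = a_c[0] * e1 + a_c[1] * e2
B = b_c[0] * e1 + b_c[1] * e2

sym = (A @ B + B @ A) / 2        # symmetric part of the product
antisym = (A @ B - B @ A) / 2    # antisymmetric part

# The symmetric part is the scalar a.b (times the identity matrix)
assert np.allclose(sym, np.dot(a_c, b_c) * I2)
# The antisymmetric part is the bivector (a1 b2 - a2 b1) e1 e2
assert np.allclose(antisym, (a_c[0] * b_c[1] - a_c[1] * b_c[0]) * (e1 @ e2))
print("a.b =", np.dot(a_c, b_c))
```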
There are a number of other consequences of the product axioms of GA. One such consequence is that the product of two perpendicular vectors is antisymmetric, and that any unit vector squares to $\pm 1$ ($+1$ in a Euclidean space). A number of specific algebraic structures can be represented with geometric algebras. For example, one can identify the algebra spanned by a scalar and unit bivector, such as
$$\text{span} \left\{ { 1, \mathbf{e}_1 \mathbf{e}_2 } \right\}$$
with complex numbers. This is because any unit bivector of this form (in a Euclidean space) squares to $-1$, just like the imaginary unit
$$\begin{aligned}(\mathbf{e}_1 \mathbf{e}_2)^2 &= (\mathbf{e}_1 \mathbf{e}_2)(\mathbf{e}_1 \mathbf{e}_2) \\ &= \mathbf{e}_1 (\mathbf{e}_2 \mathbf{e}_1) \mathbf{e}_2 \\ &= -\mathbf{e}_1 (\mathbf{e}_1 \mathbf{e}_2) \mathbf{e}_2 \\ &= -(\mathbf{e}_1 \mathbf{e}_1) (\mathbf{e}_2 \mathbf{e}_2) \\ &= - (1)(1) \\ &= -1.\end{aligned}$$
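A quick check of this correspondence, using a $2\times 2$ real matrix representation of the plane's GA (an illustrative choice, not the only one):

```python
import numpy as np

e1 = np.array([[1.0, 0.0], [0.0, -1.0]])   # e1^2 = identity
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])    # e2^2 = identity, e1 e2 = -e2 e1
i_ = e1 @ e2                               # the unit bivector e1 e2

# It squares to minus the identity, just like the imaginary unit
assert np.allclose(i_ @ i_, -np.eye(2))

# And span{1, e1 e2} multiplies exactly like the complex numbers
def rep(z):
    """Map a complex number to a multivector in span{1, e1 e2}."""
    return z.real * np.eye(2) + z.imag * i_

z, w = 2 + 3j, -1 + 0.5j   # arbitrary sample values
assert np.allclose(rep(z) @ rep(w), rep(z * w))
print("ok")
```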
Other examples of algebraic structures that can have GA representations include quaternions, the Pauli (spin) algebra of quantum mechanics, and the Dirac algebra from QED.
The GA representation of dual vectors is through multiplication by a (unit) pseudoscalar, an ordered product of all the unit vectors of the space, often denoted $ I $. For example, negative multiplication by the three-dimensional pseudoscalar has the duality property illustrated in the exterior algebra duality example
$$\begin{aligned}-I (\mathbf{e}_1 \mathbf{e}_2)&=-\mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_1 \mathbf{e}_2 \\ &=-\mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \\ &=\mathbf{e}_1 \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_2 \mathbf{e}_3 \\ &=\mathbf{e}_3,\end{aligned}$$
$$\begin{aligned}-I\mathbf{e}_2 \mathbf{e}_3&=- \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_2 \mathbf{e}_3 \\ &=\mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_3 \\ &=\mathbf{e}_1.\end{aligned}$$
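These products are easy to verify mechanically, encoding Euclidean basis blades as bitmasks (every generator squares to $+1$, so only the reordering sign matters):

```python
def blade_gp(a, b):
    """Geometric product of Euclidean basis blades (bitmasks): returns (sign, blade)."""
    s, t = 0, a >> 1
    while t:                        # count transpositions needed to sort the product
        s += bin(t & b).count("1")
        t >>= 1
    return (-1 if s % 2 else 1), a ^ b

E1, E2, E3 = 0b001, 0b010, 0b100
I = 0b111                           # the pseudoscalar e1 e2 e3

sign, blade = blade_gp(I, E2 | E3)
assert (-sign, blade) == (1, E1)    # -I (e2 e3) = +e1

sign, blade = blade_gp(I, E1 | E2)
assert (-sign, blade) == (1, E3)    # -I (e1 e2) = +e3, matching *(e1 ^ e2) = e3
print("ok")
```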
A number of fundamental geometric operations, such as projection, rotation, and reflection can all be represented using GA multivector product operations.
Clifford algebra
In GA the basis vectors for the space are typically real valued. Complex-valued vectors have uses in GA (e.g., the frequency domain representation of vectors in electrodynamics), but the underlying basis for the vector space is still real valued (i.e., $\text{span} \left\{ { \mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3 } \right\}$).
Clifford algebras provide a further generalization, allowing those basis vectors to reside in a complex vector space, with suitable modifications of the vector product rules.
Multilinear
All of these algebras are multilinear: their products distribute over scalar-weighted sums in each argument. For example, in an exterior algebra
$$\mathbf{a} \wedge (\alpha \mathbf{b} + \beta \mathbf{c}) = \alpha \mathbf{a} \wedge \mathbf{b} + \beta \mathbf{a} \wedge \mathbf{c},$$
$$(\alpha \mathbf{b} + \beta \mathbf{c})\wedge \mathbf{a} = \alpha \mathbf{b} \wedge \mathbf{a} + \beta \mathbf{c} \wedge \mathbf{a},$$
or in GA
$$\mathbf{a} \left( { \alpha \mathbf{b} + \beta \mathbf{c} \mathbf{d} } \right)= \alpha \mathbf{a} \mathbf{b} + \beta \mathbf{a} \mathbf{c} \mathbf{d}.$$
$$\left( { \alpha \mathbf{b} + \beta \mathbf{c} \mathbf{d} } \right) \mathbf{a} = \alpha \mathbf{b} \mathbf{a} + \beta \mathbf{c} \mathbf{d}\mathbf{a}.$$
Best Answer
This is correct.
This is also correct.
This is also correct. Formally, the exterior and tensor algebras are ($\mathbb{Z}_{\ge 0}$-)graded algebras, meaning exactly that every element is a sum of homogeneous elements of various degrees and that a product of a homogeneous element of degree $k$ and a homogeneous element of degree $\ell$ is a homogeneous element of degree $k + \ell$. Clifford algebras do not have this property, but they are $\mathbb{Z}_2$-graded; there is still a "$\bmod 2$" degree that allows you to write every element as a sum of "even" and "odd" components, and the product of an even element with an even element is even, etc.