I'm taking an undergrad GR course, and our text (Lambourne) mentions covariant and contravariant vectors and tensors ad nauseam, but never really gives a formal definition of what they are, or of how they differ in any physical sense (other than in how they transform). Is there any physical intuition behind these two labels? There should be, right? If they differ in how they transform under a change of coordinates, doesn't that indicate that there has to be some way of visualizing their difference, since coordinate transformations are easily visualized?

# [Physics] Is the covariance or contravariance of vectors/tensors something that can be “visualized”?

tensor-calculus, vectors

#### Related Solutions

This is not really an answer to your question, essentially because there isn't (currently) a question in your post, but it is too long for a comment.

Your statement that

> A co-ordinate transformation is linear map from a vector to itself with a change of basis.

is muddled and ultimately incorrect. Take some vector space $V$ and two bases $\beta$ and $\gamma$ for $V$. Each of these bases can be used to establish a representation map $r_\beta:\mathbb R^n\to V$, given by
$$r_\beta(v)=\sum_{j=1}^nv_j e_j$$
if $v=(v_1,\ldots,v_n)$ and $\beta=\{e_1,\ldots,e_n\}$. The coordinate transformation is **not** a linear map from $V$ to itself. Instead, it is the map
$$r_\gamma^{-1}\circ r_\beta:\mathbb R^n\to\mathbb R^n,\tag 1$$
and takes coordinates to coordinates.
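
To make this concrete, here is a minimal numerical sketch (the names `B`, `G` and the use of $\mathbb R^3$ are my own illustration, not anything from the text): store each basis as the columns of an invertible matrix, so that $r_\beta(v)=Bv$, and the coordinate transformation (1) becomes the matrix $G^{-1}B$ acting on coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two bases for V = R^3, stored as the columns of invertible matrices.
B = rng.normal(size=(3, 3))   # basis beta:  r_beta(v)  = B @ v
G = rng.normal(size=(3, 3))   # basis gamma: r_gamma(w) = G @ w

v_beta = np.array([1.0, 2.0, 3.0])   # coordinates of some vector in basis beta
x = B @ v_beta                        # the actual vector in V

# The coordinate transformation r_gamma^{-1} o r_beta takes beta-coordinates
# to gamma-coordinates of the *same* vector: v_gamma = G^{-1} B v_beta.
v_gamma = np.linalg.solve(G, x)

assert np.allclose(G @ v_gamma, B @ v_beta)  # one vector, two coordinate charts
```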

Now, to go to the heart of your confusion, it should be stressed that **covectors are not members of $V$**; as such, the representation maps do not apply to them directly in any way. Instead, they belong to the *dual space* $V^\ast$, which I'm hoping you're familiar with. (In general, I would strongly discourage you from reading texts that pretend to lay down the law on the distinction between vectors and covectors without talking at length about the dual space.)

The dual space is the vector space of all linear functionals from $V$ into its scalar field: $$V^*=\{\varphi:V\to\mathbb R:\varphi\text{ is linear}\}.$$ It has the same dimension as $V$, and any basis $\beta$ of $V$ has a unique dual basis $\beta^*=\{\varphi_1,\ldots,\varphi_n\}$ characterized by $\varphi_i(e_j)=\delta_{ij}$. Since $\beta^*$ is a different basis from $\beta$ (indeed, a basis of a different space), it is not surprising that the corresponding representation map is different.

To lift the representation map to the dual vector space, one needs the notion of the adjoint of a linear map. As it happens, there is in general no way to lift a linear map $L:V\to W$ to a map from $V^*$ to $W^*$; instead, one needs to reverse the arrow. Given such a map, a functional $f\in W^*$ and a vector $v\in V$, there is only one combination which makes sense, which is $f(L(v))$. The mapping $$v\mapsto f(L(v))$$ is a linear mapping from $V$ into $\mathbb R$, and it's therefore in $V^*$. It is denoted by $L^*(f)$, and defines the action of the adjoint $$L^*:W^*\to V^*.$$
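
In components, the adjoint is nothing more exotic than the transpose: if $L$ is a matrix acting on column vectors and a functional $f$ is stored as an array of components, then $L^*(f)$ has components $L^\top f$. A quick sanity check (illustrative code with made-up names, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=(4, 3))   # a linear map L : R^3 -> R^4
f = rng.normal(size=4)        # a functional f in (R^4)^*, as a component array
v = rng.normal(size=3)        # a vector in R^3

# L^*(f) is defined by (L^* f)(v) = f(L v); in components that is L^T f.
assert np.isclose(f @ (L @ v), (L.T @ f) @ v)
```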

If you apply this to the representation maps on $V$, you get the adjoints $r_\beta^*:V^*\to\mathbb R^{n,*}$, where the latter is canonically equivalent to $\mathbb R^n$ because it has a canonical basis. The inverse of this map, $(r_\beta^*)^{-1}$, is the representation map $r_{\beta^*}:\mathbb R^n\cong\mathbb R^{n,*}\to V^*$. This is the origin of the 'inverse transpose' rule for transforming covectors.

To get the transformation rule for covectors between two bases, you need to string two of these together: $$ \left((r_\gamma^*)^{-1}\right)^{-1}\circ(r_\beta^*)^{-1}=r_\gamma^*\circ (r_\beta^*)^{-1}:\mathbb R^n\to \mathbb R^n, $$ which is very different to the one for vectors, (1).
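
Here is a sketch of the two rules side by side (again with illustrative names): if vector coordinates transform with $A=G^{-1}B$, covector coordinates transform with the inverse transpose of $A$, and the pairing of the two is basis-independent.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(3, 3))   # basis beta (columns)
G = rng.normal(size=(3, 3))   # basis gamma (columns)

A = np.linalg.solve(G, B)     # vector coordinate change: v_gamma = A @ v_beta

v_beta = rng.normal(size=3)   # vector components in beta
f_beta = rng.normal(size=3)   # covector components in the dual basis beta^*

v_gamma = A @ v_beta                       # contravariant rule
f_gamma = np.linalg.inv(A).T @ f_beta      # covariant rule: inverse transpose

# The pairing <phi, v> = phi(v) does not depend on the basis:
assert np.isclose(f_beta @ v_beta, f_gamma @ v_gamma)
```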

Still think that vectors and covectors are the same thing?

**Addendum**

Let me, finally, address another misconception in your question:

> An inner product is between elements of the same vector space and not between two vector spaces, it is not how it is defined.

Inner products are indeed defined by taking both inputs from the same vector space. Nevertheless, it is still perfectly possible to define a bilinear form $\langle \cdot,\cdot\rangle:V^*\times V\to\mathbb R$ which takes one covector and one vector to give a scalar; it is simply the action of the former on the latter:
$$\langle\varphi,v\rangle=\varphi(v).$$
This bilinear form is always guaranteed and presupposes strictly *less* structure than an inner product. This is the 'inner product' which reads $\varphi_j v^j$ in Einstein notation.

Of course, this does relate to the inner product structure $ \langle \cdot,\cdot\rangle_\text{I.P.}$ on $V$ when there is one. Having such a structure enables one to identify vectors and covectors in a canonical way: given a vector $v$ in $V$, its corresponding covector is the linear functional $$ \begin{align} i(v)=\langle v,\cdot\rangle_\text{I.P.} : V&\longrightarrow\mathbb R \\ w&\mapsto \langle v,w\rangle_\text{I.P.}. \end{align} $$ By construction, the two bilinear forms are canonically related, so that the 'inner product' $\langle\cdot,\cdot\rangle$ between $i(v)\in V^*$ and $w\in V$ is exactly the same as the inner product $\langle\cdot,\cdot\rangle_\text{I.P.}$ between $v\in V$ and $w\in V$. That use of language is perfectly justified.
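
Numerically, $i$ is just "multiply by the metric matrix", and the compatibility of the two bilinear forms is a one-line check (a sketch; the positive-definite matrix `g` is my own stand-in for an inner product):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(3, 3))
g = M @ M.T + 3 * np.eye(3)   # a symmetric positive-definite "metric" on V

v = rng.normal(size=3)
w = rng.normal(size=3)

i_v = g @ v                   # i(v) = <v, .>_I.P., i.e. v with its index lowered

# The dual-primal pairing of i(v) with w equals the inner product of v with w:
assert np.isclose(i_v @ w, v @ (g @ w))
```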

**Addendum 2, on your question about the gradient**

I should really try and convince you at this point that the transformation laws are in fact enough to show something is a covector. (The way the argument goes is that one can define a linear functional on $V$ via the form in $\mathbb R^{n*}$ given by the components, and the transformation laws ensure that this form in $V^*$ is independent of the basis; alternatively, given the components $f_\beta,f_\gamma\in\mathbb R^n$ with respect to two bases, the representation maps give the forms $r_{\beta^*}(f_\beta)$ and $r_{\gamma^*}(f_\gamma)$ in $V^*$, and the two are equal because of the transformation laws.)

However, there is indeed a deeper reason for the fact that the gradient is a covector. Essentially, it is to do with the fact that the equation $$df=\nabla f\cdot dx$$ does not actually need a dot product; instead, it relies on the simpler structure of the dual-primal bilinear form $\langle \cdot,\cdot\rangle$.

To make this precise, consider an arbitrary function $T:\mathbb R^n\to\mathbb R^m$. The derivative of $T$ at $x_0$ is defined to be the (unique) linear map $dT_{x_0}:\mathbb R^n\to\mathbb R^m$ such that
$$
T(x)=T(x_0)+dT_{x_0}(x-x_0)+o(|x-x_0|),
$$
if it exists. The gradient is exactly this map; it was *born* as a linear functional, whose coordinates over *any* basis are $\frac{\partial f}{\partial x_j}$ to ensure that the multi-dimensional chain rule,
$$
df=\sum_j \frac{\partial f}{\partial x_j}d x_j,
$$
is satisfied. To make things easier to understand for undergraduates who are fresh out of 1D calculus, this linear map is most often 'dressed up' as the corresponding vector, which is uniquely obtainable through the Euclidean structure, and whose action must therefore go back through that Euclidean structure to get to the original $df$.
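
A small numerical illustration of this viewpoint (the function and numbers are chosen arbitrarily): the array of partial derivatives acts on a displacement as a linear functional, and that action reproduces the first-order change of $f$, with no inner product in sight.

```python
import numpy as np

def f(x):
    return np.sin(x[0]) * x[1] + x[2] ** 2

def df(x):
    # The differential of f at x, as a linear functional: its components are
    # the partial derivatives, and it acts on a displacement h via df(x) @ h.
    return np.array([np.cos(x[0]) * x[1], np.sin(x[0]), 2 * x[2]])

x0 = np.array([0.3, -1.2, 0.7])
h = np.array([0.5, 0.2, -0.4])
t = 1e-6

print(df(x0) @ (t * h))        # the functional applied to a small step...
print(f(x0 + t * h) - f(x0))   # ...matches the actual change to O(t^2)
```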

**Addendum 3**

OK, it is now sort of clear what the main question is (unless that changes again), though it is still not particularly clear in the question text. The thing that needs addressing is stated in the OP's answer in this thread:

> the dual vector space is itself a vector space and the fact that it needs to be cast off as a row matrix is based on how we calculate linear maps and not on what linear maps actually are. If I had defined matrix multiplication differently, this wouldn't have happened.

I will, then, also address this question: **given that the dual (/cotangent) space is also a vector space, what forces us to consider it 'distinct' enough from the primal that we display it as row vectors instead of columns, and say its transformation laws are different?**

The main reason for this is well addressed by Christoph in his answer, but I'll expand on it. The notion that something is co- or contra-variant is not well defined 'in vacuum'. Literally, the terms mean "varies with" and "varies against", and they are meaningless unless one says *what* the object in question varies with or against.

In the case of linear algebra, one starts with a given vector space, $V$. The unstated reference is always, by convention, the basis of $V$: covariant objects transform exactly like the basis, and contravariant objects use the transpose-inverse of the basis transformation's coefficient matrix.

One can, of course, turn the tables, and change one's focus to the dual, $W=V^*$, in which case the primal $V$ now becomes the dual, $W^*=V^{**}\cong V$. In this case, quantities that used to transform with the primal basis now transform against the dual basis, and vice versa. This is exactly why we call it the dual: there exists a full duality between the two spaces.

However, as is the case anywhere in mathematics where two fully dual spaces are considered, one needs to break this symmetry to get anywhere. There are two classes of objects which behave differently, and a transformation that swaps the two. This has two distinct, related advantages:

- Anything one proves for one set of objects has a dual fact which is automatically proved.
- Therefore, one need only ever prove one version of the statement.

When considering vector transformation laws, one always has (or can have, or should have), in the back of one's mind, the fact that one can rephrase the language in terms of the duality-transformed objects. However, since the *content* of the statements is not altered by the transformation, it is not typically useful to perform the transformation: one needs to state *some* version, and there's not really any point in stating both. Thus, one (arbitrarily, -ish) breaks the symmetry, rolls with that version, and is aware that a dual version of all the development is also possible.

However, this dual version is *not* the same. Covectors can indeed be expressed as row vectors with respect to some basis of covectors, and the coefficients of vectors in $V$ would then vary with the new basis instead of against, but then for each actual implementation, the matrices you would use would of course be duality-transformed. You would have changed the language but not the content.

Finally, it's important to note that even though the dual objects are equivalent, it does not mean they are the same. This is why we call them dual, instead of simply saying that they're the same! As regards vector spaces, then, one still has to prove that $V$ and $V^*$ are not only dually related, but also different. This is made precise in the statement that *there is no natural isomorphism between a vector space and its dual*, which is phrased, and proved, in the language of category theory. The notion of 'natural' isomorphism is tricky, but it would imply the following:

For each vector space $V$, you would have an isomorphism $\sigma_V:V\to V^*$. You would want this isomorphism to play nicely with the duality structure, and in particular with the duals of linear transformations, i.e. their adjoints. That means that for any vector spaces $V,W\in\mathrm{Vect}$ and any linear transformation $T:V\to W$, you would want the diagram
$$\begin{array}{ccc} V & \xrightarrow{\;T\;} & W \\ {\scriptstyle\sigma_V}\downarrow & & \downarrow{\scriptstyle\sigma_W} \\ V^* & \xleftarrow{\;T^*\;} & W^* \end{array}$$
to commute. That is, you would want $T^* \circ \sigma_W \circ T$ to equal $\sigma_V$.

This is provably not possible to do consistently. The reason is that naturality forces $T^*\circ\sigma_V\circ T=\sigma_V$ whenever $V=W$ and $T$ is an isomorphism, and this fails for very simple choices: take $T=\lambda\,\mathrm{id}$ for any real $\lambda\neq\pm1$, which would require $\lambda^2\sigma_V=\sigma_V$. This is precisely the formal statement of the intuition in garyp's great answer.
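
If you like, you can watch the counterexample fail numerically (a sketch; `sigma` is an arbitrary invertible matrix standing in for a candidate $\sigma_V$):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma = rng.normal(size=(3, 3))   # a candidate isomorphism V -> V*, as a matrix
lam = 2.0
T = lam * np.eye(3)               # T = lambda * id, so T^* is also lambda * id

# Naturality would demand T^* o sigma o T = sigma, but here it is lambda^2 * sigma:
print(np.allclose(T.T @ sigma @ T, sigma))             # False
print(np.allclose(T.T @ sigma @ T, lam**2 * sigma))    # True
```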

In apples-and-pears language, what this means is that a general vector space $V$ and its dual $V^*$ are not only dual (in the sense that there exists a transformation that switches them and puts them back when applied twice), but they are also different (in the sense that there is no consistent way of identifying them), which is why the duality language is justified.

I've been rambling for quite a bit, and hopefully at least some of it is helpful. In summary, though, what I think you need to take away is the fact that

> Just because dual objects are equivalent doesn't mean they are the same.

This is also, incidentally, a direct answer to the question title: no, it is not foolish. They are equivalent, but they are still different.

I understand force to be a 1-form, through the following reasoning. Given a time-independent, conservative Lagrangian $L$, its differential (a 1-form in the purest sense) is $$ \mathrm{d}L = p_a ~\mathrm{d}\dot{x}^a + f_a~\mathrm{d} x^a $$ where $$ p_a = \frac{\partial L}{\partial \dot{x}^a},~f_a = \frac{\partial L}{\partial x^a}. $$ So the components of this 1-form are the momentum and the force, and this is why both are interpreted as components of covectors. It shouldn't be surprising that they are of the same type, given their relationship through Newton's second law. I also feel it's natural for momentum to be a 1-form, given its "dual" nature to position.
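
As a quick symbolic check of these component formulas (with a harmonic-oscillator Lagrangian of my own choosing, via sympy):

```python
import sympy as sp

m, k = sp.symbols('m k', positive=True)
x, xdot = sp.symbols('x xdot')

# L = (1/2) m xdot^2 - (1/2) k x^2, a time-independent conservative Lagrangian
L = sp.Rational(1, 2) * m * xdot**2 - sp.Rational(1, 2) * k * x**2

p = sp.diff(L, xdot)   # momentum component of dL: p = m*xdot
f = sp.diff(L, x)      # force component of dL:    f = -k*x

print(p, f)   # the components of the 1-form dL = p d(xdot) + f dx
```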

Now, to address your edit. Given a metric, any vector can be written as a 1-form. Given that the manifold you are in is *affine*, you can write displacements as vectors. However, nobody ever writes down displacement 1-forms. You seem to think this is at odds with the fact that the components of 1-forms transform "covariantly" and those of vectors "contravariantly". Once you use the metric to "lower the index", a vector will transform as a 1-form. Say we go from coordinates $x\rightarrow y$. The metric transforms as
$$
g'_{ab} = \frac{\partial x^c}{\partial y^a}\frac{\partial x^d}{\partial y^b} g_{cd},
$$
and a vector $v$ will transform as
$$
v'^a = \frac{\partial y^a}{\partial x^b} v^b.
$$
Putting these statements together, we see that $v$ with the lowered index transforms as a 1-form should:
$$
v'_a = g'_{ab}v'^b =
\frac{\partial x^c}{\partial y^a}\frac{\partial x^d}{\partial y^b} g_{cd} \frac{\partial y^b}{\partial x^e} v^e =
\frac{\partial x^c}{\partial y^a}\, \delta^d_e\, g_{cd} v^e =
\frac{\partial x^c}{\partial y^a} g_{cd} v^d = \frac{\partial x^c}{\partial y^a} v_c.
$$
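
The whole computation can be checked numerically in a few lines (a sketch for a *linear* change of coordinates, so the Jacobian is a constant matrix; all names are my own illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Jacobian of the coordinate change, J[c, a] = dx^c / dy^a:
J = rng.normal(size=(3, 3))
Jinv = np.linalg.inv(J)          # Jinv[a, b] = dy^a / dx^b

M = rng.normal(size=(3, 3))
g = M @ M.T + 3 * np.eye(3)      # metric components g_{cd} in x-coordinates
v = rng.normal(size=3)           # vector components v^b in x-coordinates

g_prime = J.T @ g @ J            # g'_{ab} = (dx^c/dy^a)(dx^d/dy^b) g_{cd}
v_prime = Jinv @ v               # v'^a = (dy^a/dx^b) v^b

# The lowered index transforms covariantly: v'_a = (dx^c/dy^a) v_c
assert np.allclose(g_prime @ v_prime, J.T @ (g @ v))
```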

## Best Answer

This whole business of covariant vs contravariant is very old school. Some very old texts go into ways of visualizing this. I would suggest instead learning about tangent vectors (contravariant) and 1-forms (covariant) and the equivalence between tangent vectors and directional derivatives.

Associate the vector $\vec{v}$ with the derivative operator $\vec{\frac{d}{d\lambda}}$ by saying that there is a curve parameterized by $\lambda$ that has $\vec{v}$ as its tangent vector.

Similarly, associate to the function $f$ the 1-form $df$. A 1-form is a linear map from tangent vectors onto real numbers. A 1-form $df$ maps a tangent vector $\vec{\frac{d}{d\lambda}}$ to the real number $df \left( \vec{\frac{d}{d\lambda}} \right) \equiv \frac{df}{d\lambda}$.

Once you are comfortable with this idea, you will notice that we can introduce a coordinate system $x^i$ and tangent vectors $\frac{\partial}{\partial x^i}$ and one-forms $dx^i$. Note that from our rule, $dx^i \left( \vec{\frac{\partial}{\partial x^j} } \right) = \delta^i_j$.
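
In coordinates, this duality rule is just the statement that the derivative of the coordinate function $x^i$ along the $x^j$ direction is $\delta^i_j$; a short symbolic check (illustrative):

```python
import sympy as sp

x1, x2, x3 = coords = sp.symbols('x1 x2 x3')

# dx^i acting on the tangent vector d/dx^j is the derivative of the
# coordinate function x^i with respect to x^j, i.e. the Kronecker delta:
pairing = sp.Matrix(3, 3, lambda i, j: sp.diff(coords[i], coords[j]))
print(pairing)  # the identity matrix: dx^i(d/dx^j) = delta^i_j
```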

You can then parameterize your curve with the functions $x^i(\lambda)$. Note that from the chain rule

$\vec{ \frac{d}{d\lambda} } = \frac{d x^i}{d \lambda} \vec{\frac{\partial}{\partial x^i}}$

and you can use what we've produced so far to show that

$df = \frac{\partial f}{\partial x^i} dx^i$.

When all is said and done, you can prove that

$df \left( \vec{\frac{d}{d\lambda}} \right) = \frac{d x^i}{d \lambda} \frac{\partial f}{\partial x^j} \delta_i^j = \frac{df}{d\lambda}$

is coordinate independent, as it should be.
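
If you want to see the whole chain of identities in action, here is a symbolic check (the curve and the function are chosen arbitrarily) that the component formula for $df\left(\vec{\frac{d}{d\lambda}}\right)$ agrees with differentiating $f$ along the curve directly:

```python
import sympy as sp

lam = sp.symbols('lam')

# A curve x^i(lam) and a scalar function f, both purely for illustration:
x = [sp.cos(lam), sp.sin(lam), lam**2]
X = sp.symbols('X1 X2 X3')
f = X[0] * X[2] + X[1]**2                 # f written in the coordinates X^i

df_components = [sp.diff(f, Xi) for Xi in X]          # partial f / partial x^i
tangent_components = [sp.diff(xi, lam) for xi in x]   # dx^i / dlam

on_curve = list(zip(X, x))
# df(d/dlam) assembled from components...
pairing = sum(c.subs(on_curve) * t
              for c, t in zip(df_components, tangent_components))
# ...equals d/dlam of f evaluated along the curve:
assert sp.simplify(pairing - sp.diff(f.subs(on_curve), lam)) == 0
```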

From there on, you can define arbitrary tensors as multilinear maps taking $n$ 1-forms and $m$ vectors onto real numbers. The utility of this construction is that it is very geometrical and at the same time not tied to coordinates (abstract). You also never have to wonder which way a thing transforms, because it's always the natural way.

I recommend you pick up a good book on differential geometry for physicists. Geometrical Methods of Mathematical Physics by Schutz is OK, his GR book is probably more useful. The bible by Misner, Thorne and Wheeler goes into great depth into this business and has handy visualizations of n-forms if you are so inclined.