The conceptual idea behind raising and lowering indices

differential-geometry, general-relativity, special-relativity, tensors

I've been watching Eigenchris' playlists on Tensors for Beginners and Tensor Calculus. His videos really clear up a lot of concepts. In the last video of the Tensors for Beginners series, he talks about the motivation behind raising and lowering indices.

At minute 7:38, he introduces a new notation in which the index magically goes down: $g_{ij}v^j$ is rewritten as $v_i$.

But he doesn't really explain that change, and he moves on without further comment.

What I have in mind is that, as we carry out the summation, we end up with terms from the metric where $i \neq j$, which turn out to be zero in some cases, but not all of them, right? After he removes the vanishing terms, we just adjust the index to be able to carry out a sum with $\epsilon$, but that's just a thought I had.
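For concreteness, writing the lowering out explicitly in two dimensions makes those off-diagonal terms visible:

$$v_1 = g_{1j}v^j = g_{11}v^1 + g_{12}v^2, \qquad v_2 = g_{2j}v^j = g_{21}v^1 + g_{22}v^2.$$

The cross terms $g_{12}v^2$ and $g_{21}v^1$ vanish only when the metric is diagonal in the chosen basis (e.g. an orthonormal basis); in general they survive, and the shorthand $v_i = g_{ij}v^j$ keeps all of them.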

Thanks for any feedback you may provide.

A bit of context from the video: he uses an incomplete dot product (a dot product with an empty slot) as a one-form to motivate the use of the metric as a tool to raise/lower indices.

Best Answer

As Ivo Terek said in the comments, that's just notation. Eigenchris says, "rather than writing $g_{ij}v^j$ I'm going to write $v_i$". That means $v_i$ is a shorthand notation for $g_{ij}v^j$.

Now, what's the motivation for this choice of notation?

For every vector $v$, the operator $g(v,-)$ is a covector (it eats a vector $w$ and spits out a scalar $g(v,w)$) and, as such, it has some components with respect to the dual basis $\{\epsilon^i\}$ of the space of covectors. It just so happens that the components of $g(v,-)$ are precisely the $g_{ij}v^j$. That's what the equation $$g(v,-) = g_{ij}v^j\epsilon^i$$ means. In that sense, you could write something like $$(g(v,-))_i=g_{ij}v^j$$ but, since the covector $g(v,-)$ is so closely related to the vector $v$, it is customary to write its components $(g(v,-))_i$ as $v_i$. That's why you have $$g_{ij}v^j=(g(v,-))_i=v_i$$
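As a concrete numerical illustration, here is a minimal sketch in NumPy; the Minkowski metric $\mathrm{diag}(-1,1,1,1)$ and the component values are just example choices, not anything from the video:

```python
import numpy as np

# Illustrative choice: Minkowski metric in the (-,+,+,+) signature.
g = np.diag([-1.0, 1.0, 1.0, 1.0])

# Contravariant components v^j of some example vector.
v_up = np.array([2.0, 1.0, 0.0, 3.0])

# Lowered components v_i = g_{ij} v^j: the contraction is just a
# matrix-vector product in a fixed basis.
v_down = g @ v_up          # equivalently: np.einsum('ij,j->i', g, v_up)
print(v_down)              # [-2.  1.  0.  3.]

# The covector g(v,-) acting on a vector w returns the scalar g_{ij} v^j w^i.
w = np.array([1.0, 4.0, 2.0, 0.0])
print(v_down @ w)          # same as w @ g @ v_up
```

In a fixed basis, lowering an index is nothing more than contracting the component array with the metric's component matrix; that matrix-vector product is exactly the sum $g_{ij}v^j$.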

That's why, right after that, he says "it's almost as if the metric tensor components are lowering the index of $v^j$ to give the covector components of $g(v,-)$...". So one writes $v_i=(g(v,-))_i$ for the sole purpose of having the mnemonic $v_i=g_{ij}v^j$.

Personally, I don't use that notation and write $g_{ij}v^j$ every time. That's just a question of taste.
