Differential Geometry – Taking the Dot Product in Polar Coordinates Using the Metric Tensor

Tags: coordinate-systems, differential-geometry, general-relativity, vectors

In an elementary vectors class, we learn a nice formula for the dot product of two vectors,

$$\mathbf{a}\cdot\mathbf{b}=|\mathbf{a}||\mathbf{b}|\cos\theta,\tag{1}$$ where $\theta$ is the angle between the two vectors. Let's work in 2D to keep things simple.

In general relativity (and presumably in differential geometry), we learn that the scalar product of two vectors is given by

$$\mathbf{a}\cdot\mathbf{b}=a_\mu b^\mu=g_{\mu\nu}a^\nu b^\mu=g_{11}a^1b^1+g_{12}a^1b^2+g_{21}a^2b^1+g_{22}a^2b^2,\tag{2}$$

where $g_{\mu\nu}$ is the metric tensor. Both formulas agree when we use Cartesian coordinates. What about polar coordinates? For flat space in polar coordinates, the metric is

$$g_{\mu\nu}=\begin{bmatrix}1 & 0\\0 & r^2\end{bmatrix}.$$

(Right?) Writing our vectors component-wise in polar coordinates, $\mathbf{a}=\left(r_a,\theta_a\right),\,\mathbf{b}=(r_b,\theta_b),$ and naively applying the metric-based formula $(2)$, we get

$$\mathbf{a}\cdot\mathbf{b}=r_ar_b+r^2\theta_a\theta_b.$$

Clearly this is incorrect, and inconsistent with $(1)$. As discussed in the following posts, 1, 2, the issue stems from confusing a point $(r,\theta)$ with a vector from the origin pointing to $(r,\theta)$. Writing our vectors as tuples $\mathbf{a}=\left(r_a,\theta_a\right)$ is incorrect.
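To see the inconsistency concretely, here is a quick numeric sketch (the particular numbers are arbitrary, and the ambiguous $r$ in the naive formula is evaluated at $r_a$ just to get a number):

```python
import math

# Two points, given by their polar coordinates (arbitrary example values):
r_a, th_a = 2.0, 0.3   # point A
r_b, th_b = 1.5, 1.1   # point B

# Formula (1): dot product of the two position vectors, computed in Cartesian coordinates.
ax, ay = r_a * math.cos(th_a), r_a * math.sin(th_a)
bx, by = r_b * math.cos(th_b), r_b * math.sin(th_b)
dot_cartesian = ax * bx + ay * by             # equals r_a*r_b*cos(th_a - th_b) ~ 2.09

# Naive use of formula (2): treat the point coordinates as if they were vector
# components, with the ambiguous r evaluated at r_a just to get a number.
dot_naive = r_a * r_b + r_a**2 * th_a * th_b  # ~ 4.32

print(dot_cartesian, dot_naive)               # the two clearly disagree
```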

Clearly, if we have two arbitrary vectors in 2D Euclidean space, we can talk about the angle between them, and so $(1)$ should still hold even if we are working in a different coordinate system.

How can $(2)$ be applied to vectors in a polar coordinate system, such that it reduces to $(1)$? How must we write our vector-components for $\mathbf{a}$ such that $(2)$ can be applied?

Best Answer

A "warning": there are a lot of subtle issues here, but in order to not write a textbook, I tried to limit myself to emphasize only some of the issues. Also, the organization definitely isn't perfect, but hopefully this clarifies some issues.


My first remark is more of a "sanity check". The notion of a metric tensor field $g$ is meant to generalize the familiar concept of dot product in Euclidean spaces to arbitrary smooth manifolds. So, obviously, if this generalization is to be useful in any reasonable sense, it better reproduce the old results. Second, changing coordinates is a completely "artificial" idea in the following sense: the naive definition of a vector is "an arrow with a certain magnitude and direction, emanating from a certain point".

Well, this "definition" can certainly be made more precise, but here's the key point: an arrow is an arrow! The arrow doesn't know anything about coordinates or components with respect to a basis, so it doesn't care whether you use cartesian/polar/elliptical/parabolic/hyperbolic or any other coordinate system. If you do the math properly, you should be describing the vector equally well in any coordinate system.


(For ease of typing, I won't write vectors in boldface.) First, we recall the following definition:

Definition: A (Riemannian) metric tensor field on a smooth manifold $M$ is a map which assigns, in a "smooth" way, to each point $p \in M$ an inner product $g_p$ on the tangent space $T_pM$.

In your case, we shall specialize to the case $M = \Bbb{R}^2$, where $g$ is the "standard metric". Now a point in $M$ is simply a tuple of numbers. In polar coordinates, we can specify a point using a radius $r$ and an angle $\theta$ as follows: $p = (r \cos \theta, r \sin \theta)$ (don't think of this as the cartesian components or "cartesian representation of polar coordinates" or anything else...this is simply a tuple of real numbers, and hence it is a point in $\Bbb{R}^2$. That's it.)

Now, let $a,b \in T_pM$ (i.e. arrows which start at $p$). Note that $T_pM$ is a $2$-dimensional vector space, and hence it is spanned by two linearly independent vectors. There are several bases we can choose, but here are two of them: $\left\{\dfrac{\partial}{\partial x}\bigg|_p, \dfrac{\partial}{\partial y} \bigg|_p\right\}$ and $\left\{\dfrac{\partial}{\partial r}\bigg|_p, \dfrac{\partial}{\partial \theta}\bigg|_p\right\}$. What this means is that the vector $a$ can be written as \begin{align} a &= x_a \dfrac{\partial}{\partial x}\bigg|_p + y_a \dfrac{\partial}{\partial y}\bigg|_p \end{align} for some $x_a, y_a \in \Bbb{R}$. But there is nothing special about this basis, so we could just as well write \begin{align} a &= r_a \dfrac{\partial}{\partial r}\bigg|_p + \theta_a \dfrac{\partial}{\partial \theta}\bigg|_p \tag{$*$} \end{align} for some $r_a, \theta_a \in \Bbb{R}$. Note that the numbers $x_a, y_a, r_a, \theta_a$ are simply expansion coefficients when writing a vector relative to a basis! They by themselves do not have any physical/geometrical meaning; it is the actual vectors $a, b, \dfrac{\partial}{\partial x}\bigg|_p, \dots$ which have physical meaning, and it is the metric $g$ which contains all the geometric information about the space $M$ (if this point is unclear, you should revisit some linear algebra).

Now, let's figure out what $x_a$ and $y_a$ are. Use the fact that \begin{align} \dfrac{\partial}{\partial r}\bigg|_p &= \dfrac{\partial x}{\partial r}\bigg|_p \dfrac{\partial}{\partial x}\bigg|_p + \dfrac{\partial y}{\partial r}\bigg|_p\dfrac{\partial}{\partial y}\bigg|_p \\ &= \cos \theta \dfrac{\partial}{\partial x}\bigg|_p + \sin \theta \dfrac{\partial}{\partial y}\bigg|_p \end{align} and

\begin{align} \dfrac{\partial}{\partial \theta}\bigg|_p &= \dfrac{\partial x}{\partial \theta}\bigg|_p \dfrac{\partial}{\partial x}\bigg|_p + \dfrac{\partial y}{\partial \theta}\bigg|_p\dfrac{\partial}{\partial y}\bigg|_p \\ &= -r \sin \theta \dfrac{\partial}{\partial x}\bigg|_p + r\cos \theta \dfrac{\partial}{\partial y}\bigg|_p \end{align}

Plugging these equations into $(*)$ shows that \begin{align} a &= \left(r_a \cos \theta - r \theta_a \sin \theta \right) \dfrac{\partial}{\partial x} \bigg|_p + \left(r_a \sin \theta + r \theta_a \cos \theta \right) \dfrac{\partial}{\partial y} \bigg|_p \tag{$\ddot \smile$} \end{align} A similar equation holds for $b$ (just replace all $a$'s with $b$). Once again for emphasis: $r, \theta$ describe the point $p = (r \cos \theta, r \sin \theta) \in \Bbb{R}^2$ which is a distance $r$ from the origin, and an angle $\theta$ in the usual sense, whereas $r_a, \theta_a$ are simply the expansion coefficients of the vector $a \in T_pM$ relative to a specific basis. So, $r_a$ is NOT the length of the vector $a \in T_pM$, and $\theta_a$ is NOT the angle the vector $a$ makes!
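If you want to see this change of basis concretely, here is a small numerical sketch (the point and the components are arbitrary example values I made up):

```python
import numpy as np

# The point p, given by its polar coordinates (arbitrary example values):
r, th = 2.0, 0.7

# Components of a tangent vector a at p relative to the polar coordinate basis
# {d/dr|_p, d/dtheta|_p} (also arbitrary example values):
r_a, th_a = 0.4, -1.3

# The polar coordinate basis vectors written in the Cartesian basis {d/dx|_p, d/dy|_p}
# (these are the columns of the Jacobian of (r, theta) -> (x, y)):
d_dr     = np.array([ np.cos(th),         np.sin(th)])
d_dtheta = np.array([-r * np.sin(th), r * np.cos(th)])

# Expand a in the Cartesian basis: a = r_a * d/dr|_p + th_a * d/dtheta|_p
a_cartesian = r_a * d_dr + th_a * d_dtheta

# Compare with the closed-form components from the expansion above:
x_a = r_a * np.cos(th) - r * th_a * np.sin(th)
y_a = r_a * np.sin(th) + r * th_a * np.cos(th)
print(np.allclose(a_cartesian, [x_a, y_a]))   # True
```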

So, what is the length of the vector $a$? By definition, it is $\sqrt{g_p(a,a)}$ (the square root of the inner product of $a$ with itself). Now, \begin{align} \lVert a\rVert &= \sqrt{g_p(a,a)} \\ &= \sqrt{r_a^2 + r^2 \theta_a^2} \end{align} Similarly for $b$ (by the way, as a sanity check, verify for yourself that the sum of the squares of the coefficients in the expansion $(\ddot{\smile})$ is exactly what appears under the square root above).
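That sanity check can also be done numerically; here is a minimal sketch (again with arbitrary example values), comparing the metric formula with the ordinary Euclidean length of the Cartesian components from $(\ddot{\smile})$:

```python
import numpy as np

r, th = 2.0, 0.7          # the point p (arbitrary example values)
r_a, th_a = 0.4, -1.3     # polar components of a at p (arbitrary example values)

# Cartesian components of a, from the expansion above:
x_a = r_a * np.cos(th) - r * th_a * np.sin(th)
y_a = r_a * np.sin(th) + r * th_a * np.cos(th)

# Length from the polar-coordinate metric g_p = diag(1, r^2) ...
norm_polar = np.sqrt(r_a**2 + r**2 * th_a**2)
# ... versus the ordinary Euclidean length of the Cartesian components:
norm_cartesian = np.hypot(x_a, y_a)

print(np.isclose(norm_polar, norm_cartesian))   # True
```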

What is the angle $\alpha_a$ which the vector $a$ makes with the positive $x$-axis? Well, as I mentioned in the comments, the notion of angle may seem slightly circular (but it isn't). But for the sake of routine computation, let's go with the flow. To avoid taking inverse trig functions, note that from the basis expansion $(\ddot{\smile})$, we have \begin{align} \cos(\alpha_a) &= \dfrac{r_a \cos \theta - r \theta_a \sin \theta}{\sqrt{r_a^2 + r^2 \theta_a^2}} \quad \text{and} \quad \sin(\alpha_a) = \dfrac{r_a \sin \theta + r \theta_a \cos \theta}{ \sqrt{r_a^2 + r^2 \theta_a^2}} \end{align} (what I'm actually doing here is using the fact that $T_pM = T_p \Bbb{R}^2$ has a canonical isomorphism with $\Bbb{R}^2$ as an inner product space, where the isomorphism is given by $\xi\dfrac{\partial}{\partial x} \bigg|_p + \eta\dfrac{\partial}{\partial y} \bigg|_p \mapsto (\xi,\eta)$.) A similar thing holds for the vector $b$.

So, finally, we can compute. Let $\alpha = \alpha_a - \alpha_b$; THIS is the angle between the vectors $a$ and $b$: \begin{align} \lVert a \rVert \lVert b \rVert \cos(\alpha_a - \alpha_b) &= \lVert a \rVert \lVert b \rVert \left( \cos \alpha_a \cos\alpha_b + \sin \alpha_a \sin \alpha_b \right) \\ & \dots \\ &= r_ar_b + r^2 \theta_a \theta_b \end{align} I have already given all the relevant formulas, so I leave it to you to plug in everything and verify the algebra in the steps $\dots$ I omitted.
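If you prefer to check the algebra numerically rather than by hand, here is a short sketch of that final identity, with arbitrary example values for the point and the two vectors:

```python
import numpy as np

r, th = 2.0, 0.7           # the point p (arbitrary example values)
r_a, th_a = 0.4, -1.3      # polar components of a at p
r_b, th_b = 1.1, 0.25      # polar components of b at p

def cartesian_components(rc, tc):
    """Cartesian components of the tangent vector at p with polar components (rc, tc)."""
    return np.array([rc * np.cos(th) - r * tc * np.sin(th),
                     rc * np.sin(th) + r * tc * np.cos(th)])

a = cartesian_components(r_a, th_a)
b = cartesian_components(r_b, th_b)

# Angle each vector makes with the positive x-axis, and hence the angle between them:
alpha_a = np.arctan2(a[1], a[0])
alpha_b = np.arctan2(b[1], b[0])

lhs = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(alpha_a - alpha_b)   # formula (1)
rhs = r_a * r_b + r**2 * th_a * th_b                                      # formula (2)
print(np.isclose(lhs, rhs))   # True
```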


So, to summarize, my claim is that really, there is actually nothing to be proven here, because the very notion of "angle between two vectors" is defined such that the inner product formula holds. But hopefully my answer above highlights some of the subtleties, most important of which is that $r_a, \theta_a$ (and $x_a, y_a$) are merely expansion coefficients with respect to a chosen basis, which means they tell you very roughly speaking "how much the vector $a$ points in the radial direction, and the angular direction", and that they are NOT the length of the vector and the angle it makes.

Also, it is very crucial to realize that these basis expansion coefficients $x_a, \dots, \theta_a$ by themselves do not have any meaning; it is only the vectors $a,b, \dfrac{\partial}{\partial x}\bigg|_p, \dots, \dfrac{\partial}{\partial \theta}\bigg|_p$ which have meaning, and it is the metric $g$ which encodes all the geometry of the manifold $M$ in question.