Vector fields having zero curl with respect to two different real inner-products

Tags: curl, inner-products, vector analysis, vector fields

Let $V:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}$ be a smooth vector field, let $e(\cdot, \cdot)$ be the standard Euclidean inner-product, and let $g(\cdot, \cdot)$ be a real inner-product that is not a scalar multiple of $e$. If you wish, you can think of $e$ and $g$ as constant Riemannian metrics. In matrix form, $e$ is the identity matrix and $g$ is a symmetric, positive-definite matrix that is not a scalar multiple of $e$.

In this post I will work in the standard global coordinates of $\mathbb{R}^{3}$, so everything will be written in those coordinates and vectors will be expressed in the standard basis. Whether this simplifies or obscures things, I don't know yet.

If I'm not mistaken, the curl with respect to inner-product $g$ is
$$ \nabla\times_{g} V = \frac{1}{\sqrt{|g|}}\nabla\times_{e} (gV) $$
where $|g| = |\det(g)|$ and $gV$ denotes the matrix multiplication of matrix $g$ by column vector $V$.
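
To make the definition concrete, here is a minimal SymPy sketch of the two curls (the helper names `curl_e` and `curl_g` are my own, not standard API); as a quick sanity check, taking $g = e$ recovers the ordinary curl.

```python
# Minimal sketch of the two curls above; curl_e / curl_g are ad-hoc helper names.
import sympy as sp

x, y, z = sp.symbols('x y z')

def curl_e(W):
    # Standard Euclidean curl of a 3-component field in the coordinates (x, y, z).
    return sp.Matrix([
        sp.diff(W[2], y) - sp.diff(W[1], z),
        sp.diff(W[0], z) - sp.diff(W[2], x),
        sp.diff(W[1], x) - sp.diff(W[0], y),
    ])

def curl_g(W, g):
    # Curl with respect to a constant metric g: (1/sqrt|det g|) * curl_e(g W).
    return curl_e(g * W) / sp.sqrt(sp.Abs(g.det()))

V = sp.Matrix([sp.Function('V1')(x, y, z),
               sp.Function('V2')(x, y, z),
               sp.Function('V3')(x, y, z)])

# With g = e, the g-curl reduces to the ordinary curl.
assert (curl_g(V, sp.eye(3)) - curl_e(V)).applyfunc(sp.simplify) == sp.zeros(3, 1)
```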


Question

If $V:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}$ is a vector field for which the $e$-curl and $g$-curl are both everywhere zero, can we say something about $V$ that limits the kinds of functions it can be?


Answering the Simplified 2D Analog

I answered my question for the 2D analog case, and I thought it'd be helpful to show my work to give an understanding of the kind of answer I'm looking for.
Let
$$ g = \begin{pmatrix} \alpha & \beta \\ \beta & \gamma \end{pmatrix} \quad\text{ and }\quad e = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} $$
where $g$ is symmetric, positive-definite, constant, and not a scalar multiple of $e$.
I define the 2D $e$-curl by
$$ \nabla\times_{e} V := \partial_{1}V_{2} - \partial_{2}V_{1} $$
and the 2D $g$-curl by
$$ \nabla\times_{g} V := \frac{1}{\sqrt{|g|}} \nabla\times_{e} (gV). $$
Note that both of these output a scalar function (this is expected if you know how differential forms work).
Now suppose $V:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}$ is a vector field (if we need to specify the differentiability, let's assume it is $C^{\infty}$ so that we don't have to think about it too much) whose $e$-curl and $g$-curl are zero everywhere:
$$ \nabla\times_{e} V = 0 \quad\text{ and }\quad \nabla\times_{g} V = 0. $$
Then we have equations
\begin{align*}
& \partial_{1}V_{2} - \partial_{2}V_{1} = 0, \\
& \partial_{1}(\beta V_{1} + \gamma V_{2}) - \partial_{2}(\alpha V_{1} + \beta V_{2}) = 0.
\end{align*}

To put it another way, the last equation is
\begin{align*}\tag{1}
\beta (\partial_{1} V_{1} - \partial_{2}V_{2}) = \alpha \partial_{2} V_{1} - \gamma \partial_{1} V_{2}.
\end{align*}

By applying $\partial_{1}$ to $(1)$, we find
$$ \beta (\partial_{11} V_{1} - \partial_{12}V_{2}) = \alpha \partial_{12} V_{1} - \gamma \partial_{11} V_{2}. $$
Using commutativity of partial derivatives together with $\partial_{1}V_{2} = \partial_{2}V_{1}$ (so that $\partial_{12}V_{2} = \partial_{22}V_{1}$ and $\partial_{11}V_{2} = \partial_{12}V_{1}$), we have
$$ \beta (\partial_{11} V_{1} - \partial_{22}V_{1}) = (\alpha - \gamma) \partial_{12} V_{1}. $$
Thus we have
\begin{align*}\tag{2}
\beta\partial_{11} V_{1} - (\alpha - \gamma)\partial_{12} V_{1} - \beta\partial_{22} V_{1} = 0.
\end{align*}

Since $g$ is not a scalar multiple of $e$, either $\alpha\ne \gamma$ or $\beta\ne 0$, so not all coefficients of the LHS are zero.

Case 1: $\color{red}{\beta = 0}$.
Then we have $\alpha\ne\gamma$ and so by $(2)$ we have $\partial_{12} V_{1} = 0$. Hence $V_{1}$ is of the form
$$ V_{1}(x, y) = A(x) + B(y) $$
where $A, B:\mathbb{R}\rightarrow\mathbb{R}$. Similar reasoning shows
$$ V_{2}(x, y) = C(x) + D(y) $$
where $C, D:\mathbb{R}\rightarrow\mathbb{R}$.

Case 2: $\color{red}{\beta \ne 0}$.
Then define new coordinates $\xi = ax + by$ and $\eta = cx + dy$ where
\begin{align*}
a &= \frac{\alpha - \gamma}{2} + \sqrt{\left(\frac{\alpha - \gamma}{2}\right)^{2} + \beta^{2}}, \qquad\qquad b = \beta, \\
c &= \frac{\alpha - \gamma}{2} - \sqrt{\left(\frac{\alpha - \gamma}{2}\right)^{2} + \beta^{2}}, \qquad\qquad d = \beta.
\end{align*}

Then $(2)$ reduces to
\begin{align*}\tag{3}
-\beta (T^{2} - 4D) \partial_{\xi\eta}V_{1} = 0
\end{align*}

where $D = \det(g)$ and $T = \text{tr}(g)$. By the spectral theorem for real symmetric matrices, $g$ has two real eigenvalues $\lambda_{1}, \lambda_{2}$, both positive since $g$ is positive-definite. If $\lambda_{1} = \lambda_{2}$, then $g$ is a multiple of the identity matrix, which contradicts our hypothesis. By the AM-GM inequality and the fact that $\lambda_{1}, \lambda_{2}$ are distinct, we have $\sqrt{\lambda_{1}\lambda_{2}} < (\lambda_{1} + \lambda_{2})/2$, so $D < T^{2}/4$. Thus $(3)$ implies $\partial_{\xi\eta}V_{1} = 0$. From this we conclude that
$$ V_{1}(x, y) = A(\xi) + B(\eta) $$
where $A, B:\mathbb{R}\rightarrow\mathbb{R}$. Similar reasoning shows
$$ V_{2}(x, y) = C(\xi) + E(\eta) $$
where $C, E:\mathbb{R}\rightarrow\mathbb{R}$ (writing $E$ rather than $D$ to avoid a clash with $D = \det(g)$).

Note that $\xi = \vec{x}\cdot\vec{v}_{1}$ and $\eta = \vec{x}\cdot\vec{v}_{2}$ where $\vec{v}_{1} = (a, b)^{T}$ and $\vec{v}_{2} = (c, d)^{T}$ are eigenvectors of $g$.
We see that in every case, $V$ is a sum of vector fields, each of which depends only on the coordinate along a single eigenvector of $g$.
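
As a sanity check on Case 2, here is a short SymPy computation (the variable names are mine) confirming that after the substitution $\xi = ax + by$, $\eta = cx + dy$ the $\partial_{\xi\xi}$ and $\partial_{\eta\eta}$ coefficients of the left-hand side of $(2)$ vanish and the mixed coefficient is exactly $-\beta(T^{2} - 4D)$, which is equation $(3)$.

```python
# Sketch: verify that the change of variables xi = a x + b y, eta = c x + d y
# collapses the LHS of (2) to  -beta*(T^2 - 4D) * (mixed xi-eta derivative).
import sympy as sp

alpha, beta, gamma = sp.symbols('alpha beta gamma', positive=True)

s = sp.sqrt(((alpha - gamma) / 2)**2 + beta**2)
a, b = (alpha - gamma) / 2 + s, beta
c, d = (alpha - gamma) / 2 - s, beta

# Chain rule: d/dx = a*d/dxi + c*d/deta and d/dy = b*d/dxi + d*d/deta, so the LHS of (2)
# picks up the following coefficients in front of W_{xi xi}, W_{xi eta}, W_{eta eta}:
coeff_xixi   = beta*a**2 - (alpha - gamma)*a*b - beta*b**2
coeff_xieta  = 2*beta*a*c - (alpha - gamma)*(a*d + b*c) - 2*beta*b*d
coeff_etaeta = beta*c**2 - (alpha - gamma)*c*d - beta*d**2

T = alpha + gamma             # trace of g
Ddet = alpha*gamma - beta**2  # determinant of g

assert sp.simplify(coeff_xixi) == 0
assert sp.simplify(coeff_etaeta) == 0
assert sp.simplify(coeff_xieta + beta*(T**2 - 4*Ddet)) == 0
```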


Conjecture

This leads me to conjecture the following.

Conjecture. Let $V:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}$ be a smooth vector field, let $e = \text{diag}(1, 1, 1)$, and let $g$ be a symmetric, positive-definite matrix that is not a scalar multiple of the identity matrix. If $\nabla\times_{e} V = 0$ and $\nabla\times_{g} V = 0$, then
$$ V(\vec{x}) = A(\vec{x}\cdot\vec{v}_{1}) + B(\vec{x}\cdot\vec{v}_{2}) + C(\vec{x}\cdot\vec{v}_{3})$$
where $A, B, C:\mathbb{R}\rightarrow\mathbb{R}^{3}$ and $\vec{v}_{1}, \vec{v}_{2}, \vec{v}_{3}$ are eigenvectors of $g$.

In another post of mine, I found that the eigenvectors of $g$ form a basis that is simultaneously $e$-orthogonal and $g$-orthogonal. I'm not sure whether this observation helps, but it makes sense because it shows my conjecture treats $e$ and $g$ "on the same footing."

My approach for the 2D case doesn't seem to generalize to the 3D case, so I'm now wondering, are there any suggestions for how to approach this problem?

Best Answer

Ok, it turns out the other post I linked is crucial to simplifying my problem. The helpful fact from that post is that if we consider $g$ as a matrix, then its eigenvectors form a basis that is simultaneously $e$-orthogonal and $g$-orthogonal. Moreover, we can normalize those eigenvectors according to $e$ so that the transformation from the standard basis to the eigenbasis of $g$ is done by an orthogonal matrix. Then once we write everything in the new basis/coordinates, we are in a situation where both $e$ and $g$ are diagonalized. In that situation, things are a lot simpler.

Note: Any change of basis is meant to be considered in the passive sense. Given bases $\mathcal{E}$ and $\mathcal{P}$, it will be helpful to keep in mind that $[g]_{\mathcal{P}} = P^{T}[g]_{\mathcal{E}}P$ where $P$ is the change of basis matrix such that $P[v]_{\mathcal{P}} = [v]_{\mathcal{E}}$. I will keep all of this implicit, so hopefully this won't be confusing to the reader.
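
Here is a quick SymPy sketch of this fact for one arbitrarily chosen positive-definite $g$ (the matrix and the eigenvectors below are just an example I picked): an $e$-orthonormal eigenbasis $P$ of $[g]$ leaves $[e]$ equal to the identity and makes $[g]$ diagonal.

```python
# Sketch: an e-orthonormal eigenbasis of g diagonalizes g while keeping e the identity.
import sympy as sp

g = sp.Matrix([[2, 1, 0],
               [1, 3, 1],
               [0, 1, 2]])   # symmetric, positive-definite, not a multiple of the identity

# e-normalized eigenvectors of g (eigenvalues 1, 2, 4), e.g. obtained from g.eigenvects():
p1 = sp.Matrix([1, -1, 1]) / sp.sqrt(3)
p2 = sp.Matrix([1, 0, -1]) / sp.sqrt(2)
p3 = sp.Matrix([1, 2, 1]) / sp.sqrt(6)
P = sp.Matrix.hstack(p1, p2, p3)

# [e]_P = P^T I P is still the identity, and [g]_P = P^T g P is diagonal.
assert (P.T * P).applyfunc(sp.simplify) == sp.eye(3)
assert (P.T * g * P).applyfunc(sp.simplify) == sp.diag(1, 2, 4)
```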


2D Case

Let's first look at the 2D case to get an understanding of what I am talking about. To start things off, let $\mathcal{E} = (\vec{e}_{1}, \vec{e}_{2})$ be the standard basis, and let $$ [e] = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\qquad\text{ and }\qquad [g] = \begin{pmatrix} \alpha & \beta \\ \beta & \gamma \end{pmatrix} $$ be the matrix representations of the inner-products $e$ and $g$ with respect to $\mathcal{E}$. By the spectral theorem for real symmetric matrices, there exist eigenvectors $$ \vec{p}_{1} = \begin{pmatrix} p_{11} \\ p_{21} \end{pmatrix} \qquad\text{ and }\qquad \vec{p}_{2} = \begin{pmatrix} p_{12} \\ p_{22} \end{pmatrix} $$ of $[g]$ that are both $e$-orthogonal and $g$-orthogonal. Rescaling if necessary, we may assume $\vec{p}_{1}$ and $\vec{p}_{2}$ are $e$-normalized. Then $P = (\vec{p}_{1} \;\; \vec{p}_{2})$ is an orthogonal matrix that diagonalizes $[g]$. Rewriting everything with respect to the basis $\mathcal{P} = (\vec{p}_{1}, \vec{p}_{2})$, we find ourselves in the $\beta = 0$ case from my original post. In particular, we have $$ [e] = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\qquad\text{ and }\qquad [g] = \begin{pmatrix} \alpha & 0\\ 0 & \gamma \end{pmatrix} $$ where $\alpha, \gamma$ are not necessarily the same $\alpha, \gamma$ as above.

Let $V:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}$ be a smooth vector field such that $$ \nabla\times_{e} V = 0 \quad\text{ and }\quad \nabla\times_{g} V = 0. $$ This gives us \begin{align*} \partial_{1}V_{2} = \partial_{2}V_{1} \qquad\text{ and }\qquad \gamma\partial_{1}V_{2} = \alpha\partial_{2}V_{1}. \end{align*} Now assuming $g$ is not a scalar multiple of $e$, it must be the case that $\alpha\ne \gamma$. Also, both $\alpha, \gamma$ are nonzero. Some manipulations immediately give us $\partial_{2}V_{1} = 0$ and $\partial_{1}V_{2} = 0$. Hence $V_{1}(x, y) = f_{1}(x)$ and $V_{2}(x, y) = f_{2}(y)$ for some functions $f_{1}, f_{2}:\mathbb{R}\rightarrow\mathbb{R}$. Hence $V$ is of the form $$ V(x, y) = \begin{pmatrix} f_{1}(x) \\ f_{2}(y) \end{pmatrix}. $$ Converting back to a general basis, we find $$ V(\vec{x}) = f_{1}(\vec{x}\cdot\vec{p}_{1})\vec{p}_{1} + f_{2}(\vec{x}\cdot\vec{p}_{2})\vec{p}_{2}. $$
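
As a consistency check, here is a SymPy sketch (with one concrete choice of $g$ and its eigenvectors; the names `f1`, `f2` are placeholders) verifying that any field of this final form does have vanishing $e$-curl and $g$-curl in 2D.

```python
# Sketch: in 2D, a field f1(x.p1) p1 + f2(x.p2) p2 built from an e-orthonormal
# eigenbasis p1, p2 of g has zero e-curl and zero g-curl.
import sympy as sp

x, y = sp.symbols('x y')
f1, f2 = sp.Function('f1'), sp.Function('f2')

g = sp.Matrix([[2, 1], [1, 2]])          # eigenvalues 1 and 3
p1 = sp.Matrix([1, -1]) / sp.sqrt(2)     # eigenvector for eigenvalue 1
p2 = sp.Matrix([1,  1]) / sp.sqrt(2)     # eigenvector for eigenvalue 3

X = sp.Matrix([x, y])
V = f1(p1.dot(X)) * p1 + f2(p2.dot(X)) * p2

def curl2_e(W):
    # 2D e-curl (a scalar), as defined in the original post.
    return sp.diff(W[1], x) - sp.diff(W[0], y)

assert sp.simplify(curl2_e(V)) == 0                         # e-curl vanishes
assert sp.simplify(curl2_e(g * V) / sp.sqrt(g.det())) == 0  # g-curl vanishes
```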


3D Case Involving Three Distinct Eigenvalues

Now we will apply our method to the 3D case. Since $g$ is not a scalar multiple of $e$, its eigenvalues cannot all be the same (if all three eigenvalues were equal, then $g$ would be a scalar multiple of $e$). Unfortunately, there is still the possibility that two of the three eigenvalues are equal, which introduces some complexity. To keep things simple, we will first deal with the case where all three eigenvalues are distinct.

Let $e$ be the Euclidean inner-product and let $g$ be a real inner-product whose matrix representation has three distinct eigenvalues. By the same reasoning as above (and appealing to the spectral theorem when necessary), there is a basis for which the matrix representations of the inner-products are $$ [e] = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad\text{ and }\quad [g] = \begin{pmatrix} \alpha & 0 & 0 \\ 0 & \beta & 0 \\ 0 & 0 & \gamma \end{pmatrix}. $$ By hypothesis, $\alpha, \beta, \gamma$ are distinct.

Now if $V:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}$ is a smooth vector field such that $$ \nabla\times_{e} V = 0 \quad\text{ and }\quad \nabla\times_{g} V = 0, $$ then \begin{align*} \partial_{2}V_{3} - \partial_{3}V_{2} &= 0, \quad \partial_{3}V_{1} - \partial_{1}V_{3} = 0, \quad \partial_{1}V_{2} - \partial_{2}V_{1} = 0, \\[1.2ex] \gamma\partial_{2}V_{3} - \beta\partial_{3}V_{2} &= 0, \quad \alpha\partial_{3}V_{1} - \gamma\partial_{1}V_{3} = 0, \quad \beta\partial_{1}V_{2} - \alpha\partial_{2}V_{1} = 0. \end{align*} Remember that we are considering the case where $\alpha, \beta, \gamma$ are each distinct (and they are each positive by positive-definiteness of $g$). The above equations easily give us \begin{align*} \partial_{2}V_{3} = 0, \quad \partial_{3}V_{2} = 0, \quad \partial_{3}V_{1} = 0, \quad \partial_{1}V_{3} = 0, \quad \partial_{1}V_{2} = 0, \quad \partial_{2}V_{1} = 0. \end{align*} Thus, one can show that \begin{align*} V_{1}(x, y, z) = f_{1}(x), \quad V_{2}(x, y, z) = f_{2}(y), \quad V_{3}(x, y, z) = f_{3}(z) \end{align*} for some functions $f_{1}, f_{2}, f_{3}:\mathbb{R}\rightarrow\mathbb{R}$, and $V$ is of the form $$ V(x, y, z) = \begin{pmatrix} f_{1}(x) \\ f_{2}(y) \\ f_{3}(z) \end{pmatrix}. $$ Converting back to a general basis, we find $$ V(\vec{x}) = f_{1}(\vec{x}\cdot\vec{p}_{1})\vec{p}_{1} + f_{2}(\vec{x}\cdot\vec{p}_{2})\vec{p}_{2} + f_{3}(\vec{x}\cdot\vec{p}_{3})\vec{p}_{3}. $$ Not only does my conjecture hold here, but this outcome is slightly stronger than what I even conjectured!
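
Again as a consistency check (a sketch only, using one concrete $g$ with three distinct eigenvalues), fields of this form can be verified symbolically to have vanishing $e$-curl and $g$-curl:

```python
# Sketch: with g below (eigenvalues 1, 2, 4 and e-orthonormal eigenvectors p1, p2, p3),
# a field f1(x.p1) p1 + f2(x.p2) p2 + f3(x.p3) p3 has zero e-curl and zero g-curl.
import sympy as sp

x, y, z = sp.symbols('x y z')
f1, f2, f3 = sp.Function('f1'), sp.Function('f2'), sp.Function('f3')

g = sp.Matrix([[2, 1, 0], [1, 3, 1], [0, 1, 2]])
p1 = sp.Matrix([1, -1, 1]) / sp.sqrt(3)   # eigenvalue 1
p2 = sp.Matrix([1, 0, -1]) / sp.sqrt(2)   # eigenvalue 2
p3 = sp.Matrix([1, 2, 1]) / sp.sqrt(6)    # eigenvalue 4

X = sp.Matrix([x, y, z])
V = f1(p1.dot(X))*p1 + f2(p2.dot(X))*p2 + f3(p3.dot(X))*p3

def curl_e(W):
    return sp.Matrix([sp.diff(W[2], y) - sp.diff(W[1], z),
                      sp.diff(W[0], z) - sp.diff(W[2], x),
                      sp.diff(W[1], x) - sp.diff(W[0], y)])

assert curl_e(V).applyfunc(sp.simplify) == sp.zeros(3, 1)                           # e-curl
assert (curl_e(g * V) / sp.sqrt(g.det())).applyfunc(sp.simplify) == sp.zeros(3, 1)  # g-curl
```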


3D Case Involving Two Equal Eigenvalues

As before, go to the basis for which the matrix representations of the inner-products are $$ [e] = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad\text{ and }\quad [g] = \begin{pmatrix} \alpha & 0 & 0 \\ 0 & \beta & 0 \\ 0 & 0 & \gamma \end{pmatrix}. $$ Let us assume $\alpha = \beta\ne \gamma$. The other cases are similar. If $V:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}$ is a smooth vector field such that $$ \nabla\times_{e} V = 0 \quad\text{ and }\quad \nabla\times_{g} V = 0, $$ then \begin{align*} \partial_{2}V_{3} - \partial_{3}V_{2} &= 0, \quad \partial_{3}V_{1} - \partial_{1}V_{3} = 0, \quad \partial_{1}V_{2} - \partial_{2}V_{1} = 0, \\[1.2ex] \gamma\partial_{2}V_{3} - \beta\partial_{3}V_{2} &= 0, \quad \alpha\partial_{3}V_{1} - \gamma\partial_{1}V_{3} = 0, \quad \beta\partial_{1}V_{2} - \alpha\partial_{2}V_{1} = 0. \end{align*} This time we can't draw quite as many conclusions as before, because $\alpha = \beta$. Nonetheless, one can show \begin{align*} \partial_{2}V_{3} = 0, \quad \partial_{3}V_{2} = 0, \quad \partial_{3}V_{1} = 0, \quad \partial_{1}V_{3} = 0, \quad \partial_{1}V_{2} = \partial_{2}V_{1}, \end{align*} and find that $V_{1}(x, y, z) = f_{1}(x, y)$, $V_{2}(x, y, z) = f_{2}(x, y)$, and $V_{3}(x, y, z) = f_{3}(z)$ for some functions $f_{1}, f_{2}:\mathbb{R}^{2}\rightarrow\mathbb{R}$ (still subject to $\partial_{1}f_{2} = \partial_{2}f_{1}$) and $f_{3}:\mathbb{R}\rightarrow\mathbb{R}$. Hence $V$ is of the form $$ V(x, y, z) = \begin{pmatrix} f_{1}(x, y) \\ f_{2}(x, y) \\ f_{3}(z) \end{pmatrix}. $$ Converting back to a general basis, we find $$ V(\vec{x}) = f_{1}(\text{proj }\vec{x})\vec{p}_{1} + f_{2}(\text{proj }\vec{x})\vec{p}_{2} + f_{3}(\vec{x}\cdot\vec{p}_{3})\vec{p}_{3} $$ where $\text{proj }\vec{x}$ is the $e$-orthogonal projection of $\vec{x}$ onto $\text{Span}(\vec{p}_{1}, \vec{p}_{2})$, and $\vec{p}_{1}, \vec{p}_{2}$ are eigenvectors associated with the same (repeated) eigenvalue. This result is weaker than the original conjecture, but that is not surprising given the case we are dealing with.
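
A sketch of the same kind for this case, written in the diagonalizing coordinates (so $[g] = \operatorname{diag}(\alpha, \alpha, \gamma)$): one convenient way to enforce the leftover condition $\partial_{1}V_{2} = \partial_{2}V_{1}$ is to take the in-plane part to be a 2D gradient, and then both curls vanish.

```python
# Sketch: the alpha = beta != gamma case in the diagonalizing coordinates.  Taking the
# in-plane part to be a 2D gradient makes d1 V2 = d2 V1 hold automatically, and the field
# (d phi/dx, d phi/dy, f3(z)) then has zero e-curl and zero g-curl for g = diag(a, a, c).
import sympy as sp

x, y, z = sp.symbols('x y z')
alpha, gamma = sp.symbols('alpha gamma', positive=True)
phi = sp.Function('phi')(x, y)   # 2D potential for the in-plane part
f3 = sp.Function('f3')

V = sp.Matrix([sp.diff(phi, x), sp.diff(phi, y), f3(z)])
g = sp.diag(alpha, alpha, gamma)

def curl_e(W):
    return sp.Matrix([sp.diff(W[2], y) - sp.diff(W[1], z),
                      sp.diff(W[0], z) - sp.diff(W[2], x),
                      sp.diff(W[1], x) - sp.diff(W[0], y)])

assert curl_e(V).applyfunc(sp.simplify) == sp.zeros(3, 1)
assert (curl_e(g * V) / sp.sqrt(g.det())).applyfunc(sp.simplify) == sp.zeros(3, 1)
```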

The only remaining cases, $\alpha = \gamma\ne\beta$ and $\alpha\ne\beta = \gamma$, can be handled by an easy adaptation of the case $\alpha=\beta\ne\gamma$, and their outcomes are similar.