[Math] Differential forms make infinitesimal reasoning rigorous

Tags: differential-forms, intuition

First of all, I know that infinitesimals are not well defined in standard analysis and that, rigorously speaking, they have nothing to do with differential forms. My question is about the intuition behind a relationship that nonetheless seems to exist: differential forms appear to turn the idea of infinitesimals, widely used by physicists, into something rigorous.

Why do I say that? Well, let me lay out some of the points I have noticed.

  1. The total differential. In classical language, given $f : \mathbb{R}^n\to \mathbb{R}$ we can consider the infinitesimal change $df$ when we move from a point $(x^1,\dots,x^n)$ to a neighbouring point $(x^1+dx^1,\dots,x^n+dx^n)$ as being
    $$df = \sum_{i} \dfrac{\partial f}{\partial x^i}(x) dx^i$$
    in the language of differential forms, $df$ is the exterior derivative, which gives the change in $f$ when we feed tangent vectors to it. So $df$ and the $dx^i$ become "measuring objects" in some sense.

  2. Integration along curves. In classical language, if a curve is given parametrized with $x = x(t)$, $y=y(t)$ and $z=z(t)$, then we compute the integral of a vector field $F$ as follows: we consider $dl = (dx,dy,dz)$ an infinitesimal displacement and do the calculation
    $$\int_\Gamma F\cdot dl = \int_\Gamma F^1dx+F^2dy+F^3 dz = \int_a^b F^1 x'(t)dt+F^2y'(t)dt+F^3z'(t)dt$$
    This is obviously related to the pullback. Taking $\gamma$ to be the parametrization and $F$ a one-form, we have
    $$\gamma^\ast F = \gamma^\ast\left(\sum_i F_i\, dx^i\right)= \sum_i (F_i\circ \gamma)\, d(x^i\circ \gamma)=\sum_i (F_i\circ \gamma)\,(\gamma^i)'\,dt,$$
    and the integral is the same as before if we define $\int_\gamma F = \int_a^b \gamma^\ast F$ (see the worked example just after this list).

  3. A differential form picks out an alternating $k$-linear map in each tangent space. In that case, we know we can represent those objects by a pair of $(n-k)$-planes in the tangent space. Intuitively, we can pick just small pieces of those planes, and so at a point a differential form could be thought of, intuitively, as an infinitesimal $(n-k)$-dimensional piece of plane.
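To make point (2) concrete, here is a small worked example: take the unit circle $\gamma(t) = (\cos t, \sin t)$, $t\in[0,2\pi]$, and the one-form $F = -y\,dx + x\,dy$ on $\mathbb{R}^2$. Then

$$\gamma^\ast F = -\sin t\; d(\cos t) + \cos t\; d(\sin t) = (\sin^2 t + \cos^2 t)\,dt = dt,$$

so $\int_\gamma F = \int_0^{2\pi} dt = 2\pi$, which agrees with the classical line integral $\int_\Gamma F\cdot dl$ of the vector field $(-y, x)$ around the circle.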

Considering all of that, it seems that differential forms really exist to make manipulations with infinitesimals rigorous. But what is the intuition behind this connection between differential forms and infinitesimals?

Best Answer

In my opinion, a lot of these relationships are suggested by abuses of notation--abuses that hide what's really going on.

Don't get me wrong: some abuses of notation are harmless, or at the least, they help people get going on doing calculations. But they should still be understood to the fullest degree for those who wish to go beyond merely doing calculations.

I'll give an example: consider the relationship,

$$\frac{dx}{dy} = \frac{1}{\frac{dy}{dx}}$$

You probably know that differentials shouldn't really be divided, and that this notation is only suggestive: while what it says is true by the inverse function theorem, it says it in a voodoo-like way that doesn't stand up to closer inspection, raising more questions than answers.

Of course, there's a totally reasonable way to phrase this notion: as I said, it's the inverse function theorem. Given a function $f$ of a vector $x$, we have the Jacobian $J_f$, and we know that

$$J_{f^{-1},f(x)} = J_{f,x}^{-1}$$

Which is a totally rigorous, though perhaps less obviously useful, statement.
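A one-dimensional sanity check of that statement: take $f(x) = x^3$ on $x > 0$, so $J_{f,x} = 3x^2$. The inverse is $f^{-1}(y) = y^{1/3}$, and

$$J_{f^{-1},f(x)} = \frac{1}{3}\,y^{-2/3}\Big|_{y=x^3} = \frac{1}{3x^2} = J_{f,x}^{-1},$$

which is exactly what the suggestive $\frac{dx}{dy} = 1/\frac{dy}{dx}$ was gesturing at.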

(You might be thinking that nonstandard analysis could be useful here. Perhaps it would be, but my point is a bit larger: to understand and feel comfortable with the statement, you need to either take for granted that it stands in for something else, or accept that you need more math to understand it the way it's written.)


So, how does this relate to differentials and differential forms?

Well, mostly through the use of $d$ to denote the exterior derivative. Changing this symbol reveals how manifestly nonsensical some apparent relationships are.

For the purposes of this answer, I'll denote the exterior derivative by $\nabla$. This is reasonably familiar to students of vector calculus in 3d, and most of the results can be used directly from there.

Let's address your point (1), the total differential. It would be written as,

$$\nabla f = (\partial_i f) \nabla x^i$$

Again, recognizing the connection between the exterior derivative and the gradient from vector calculus, you should realize that the $\nabla x^i$ are nothing more than a set of basis vectors (more exactly, basis covectors), and all this does is decompose the gradient of $f$ into some coordinate directions. There is no explicit connection here between the gradient and differentials.
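A quick illustration of that "measuring" role, with no infinitesimals anywhere: in $\mathbb{R}^2$, take $f(x,y) = x^2 y$, so that

$$\nabla f = 2xy\,\nabla x + x^2\,\nabla y.$$

Feeding it a tangent vector $v = a\,\partial_x + b\,\partial_y$ gives $\nabla f(v) = 2xy\,a + x^2\,b$, which is just the directional derivative of $f$ along $v$. The $\nabla x^i$ measure honest tangent vectors, not "infinitely small displacements."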


Let's talk about point (2), integrals around curves.

This is a common misconception, even among people who work with differential forms. I'll point out that the quantity $r'(t) = (x', y', z')(t)$ is manifestly a tangent vector. It literally points tangent to the curve that is the domain of integration, and fundamentally, it obeys quite different transformation laws from those of any form.

Moreover, if $F$ is a one-form, then it should be written

$$F = F_x \nabla x + F_y \nabla y + F_z \nabla z$$

If all the supposed $dx$'s are coming from the form, then what's coming from the $dl$? As argued above, what comes with $dl$ is not a set of basis forms but a vector, the tangent vector to the curve. Writing this vector as $\ell'(t) = x' \partial_x r + y' \partial_y r + z' \partial_z r$ (where $r$ is the position vector, so $\partial_x r$ is the coordinate basis vector in the $x$ direction), we get for the dot product,

$$\int F \cdot dl = \int (F_x \circ \ell)(t)\, x'(t)\, \nabla x \cdot \partial_x r + \ldots \, dt$$

Of course, $\nabla x \cdot \partial_x r = 1$ by definition--otherwise, the basis forms would not be dual to the basis vectors. What would happen if we wrote the basis forms with the usual $dx$ notation?

$$\int F \cdot dl = \int (F_x \circ \ell)(t)\, x'(t)\, dx(\partial_x r) + \ldots \, dt$$

On its face, this looks like gobbledy-gook. Even if you had the presence of mind to distinguish between a basis form $dx$ and a differential denoting the variable of integration $dt$, it would be challenging to reconcile how these two notions should coexist in the same integral. I know I've met one person on this very site who suggested that no one should ever work with $dx$ and the like because you're just going to pull back anyway, so only $dt$ should be viewed as a differential form on this curve. That's...certainly one way of looking at things. To me, that comes at a high price of not being able to look at things geometrically. Let me explain:

What are you doing when you pull back a form in an integral like this? You're making it so the tangent vector in the target space has constant direction and magnitude (since you're pulling back to a one-dimensional parameter domain, the tangent vector there is just the trivial unit vector $\partial_t$). This is what's commonly done for form integrals, because then all your complexity is in the form, and in the Jacobian transforming that form, rather than in considering the components of the tangent vector. For this reason, the tangent vector is sometimes forgotten or neglected, since once you've pulled back, it's some trivial constant vector that will just be eaten by the form anyway. All that remains to be done is to set some convention for what direction it should be: positive or negative.

Anyway, you could call a basis form on that space by name, and perhaps some people would call it $dt$. If that abstract way of thinking works for you, do what you feel is best.
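To see this concretely, take the circle example from the question's point (2): for $\gamma(t) = (\cos t, \sin t)$ and $F = -y\,\nabla x + x\,\nabla y$, the tangent vector $\ell'(t) = -\sin t\,\partial_x r + \cos t\,\partial_y r$ rotates as $t$ advances. After pulling back to $[0, 2\pi]$, the only tangent vector in sight is the constant $\partial_t$, and all the geometry has been absorbed into the form:

$$\gamma^\ast F = (\sin^2 t + \cos^2 t)\,\nabla t = \nabla t, \qquad (\gamma^\ast F)(\partial_t) = 1,$$

so the integral is $\int_0^{2\pi} 1\, dt = 2\pi$, and the only decision left about the tangent vector is its sign convention.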


Finally, let's talk about point (3): this is more of a geometric interpretation question, and it's not unique to differential forms. Should a vector field be viewed as small, directed lines at every point? This is certainly behind the notion of field lines, which are commonly used for electric fields. I'm not sure I could say one (vectors) is more differential than the other (forms). Both involve orientations and magnitudes. In the end, I have to offer the same perspective as I would for vectors: does it make sense to think of a vector as a small piece of a line? If so, how would you decide that differentials are associated with forms instead of vectors? If not, how is this different from what you've done with forms?


Let me not digress for too long. There's a reason the notation for differential forms has stuck around as long as it has: it's enormously suggestive, and for dealing with unfamiliar concepts, suggestive notation is powerful. But as with the inverse function theorem, I submit that the notation is merely suggestive, full of shortcuts and sleight of hand. I do not think differential forms make infinitesimals rigorous--far from it; I think the notation suggests a far stronger relationship between forms and the differentials in integrals than it should.