Is it possible to motivate higher differential forms without integration?

differential-topology · motivation

You have been teaching Dennis and Inez math using the Moore method for their entire lives, and you're currently deep into topology class.

You've convinced them that topological manifolds aren't quite the right object of study because you "can't do calculus on them," and you've managed to motivate the definition of a smooth manifold.

Dennis and Inez take your motivation about "doing calculus" seriously. They tell you that they asked themselves how to take the derivative of a function $f: M \to \mathbb{R}$. They figured out independently that such a thing couldn't meaningfully be another smooth function. Instead, it has to say how the function is changing in the direction of a given vector field.

You are thrilled at their work: they've independently invented $d$ and the cotangent bundle, and they arrive at the correct formal definition of both with little prodding.

You want to point them in the direction of generalization, so you ask them to consider how to extend $d$ to higher exterior powers of the cotangent bundle. You get blank stares. Why should they think about higher tensor powers of the cotangent bundle at all? Why the alternating tensors, in particular? Yes, they know what that means (you led them to it in their linear algebra course), but the cotangent bundle just naturally showed up when they thought about derivatives, and the alternating tensors don't.

Oops! Now's the time to confess that your students never had a traditional 3-dimensional vector calculus course culminating in Stokes's theorem. Actually, it's worse: they have never heard of integrals. You see, you were torn about whether to start with the Lebesgue theory or spend time building the more intuitive Riemann integral, so you just skipped from the first half of Calc One to a topics course on higher topos theory. Lesson learned.

So, how do you teach them? (Alternatively, answer this question in the negative by proving that Dennis and Inez will always invent integration if you make them think about higher exterior derivatives enough.)

Best Answer

$\def\RR{\mathbb{R}}$I have in fact tried to tackle this problem when I teach higher differential forms. Here are some of the things I try.

**Before we do differential forms or manifolds.** For functions $f : \RR^n \to \RR^p$, introduce $Df$ and prove the multivariate change of variables formula.
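(Concretely, the change of variables formula in question is the matrix form of the multivariate chain rule: for $f : \RR^n \to \RR^p$ and $g : \RR^p \to \RR^q$,
$$D(g \circ f)(x) = Dg(f(x)) \, Df(x),$$
where $Df(x)$ is the $p \times n$ matrix of partials $(Df)_{ij} = \partial f_i / \partial x_j$.)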

Let $f$ be a function on $\RR^n$. Define the Hessian of $f$, $D^2 f$, to be the matrix of second partials $\tfrac{\partial^2 f}{\partial x_i \, \partial x_j}$. Prove the multivariate second derivative test: if $Df = 0$ at $c$ and $D^2 f(c)$ is positive definite, then $c$ is a local minimum of $f$. (This isn't logically necessary for what follows, but it helps show that this is a useful concept.)

Show that, if $c$ is a critical point, then the Hessian obeys a simple chain-rule-like transformation formula. Show that, if $c$ is not a critical point, the formula for how the Hessian changes under a change of variables is a mess.
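To spell the computation out: if $\phi : \RR^n \to \RR^n$ is a change of variables and $g = f \circ \phi$, then two applications of the chain rule give
$$\frac{\partial^2 g}{\partial u_k \, \partial u_l} = \sum_{i,j} \frac{\partial^2 f}{\partial x_i \, \partial x_j} \frac{\partial \phi_i}{\partial u_k} \frac{\partial \phi_j}{\partial u_l} + \sum_i \frac{\partial f}{\partial x_i} \frac{\partial^2 \phi_i}{\partial u_k \, \partial u_l}.$$
At a critical point the second sum vanishes, leaving the clean formula $D^2 g = (D\phi)^T (D^2 f) (D\phi)$; away from critical points, the term involving the second derivatives of $\phi$ is the mess.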

Also, during this pre-manifold time, I talk about the curl of a vector field on $\RR^2$ and prove Stokes' theorem for rectangles; but you asked me not to mention integration.

**Spend a bunch of time working with and getting used to $1$-forms, still in $\RR^n$.** Note that the multivariate chain rule means that $df$ is well defined. Note that $D^2 f$ is well defined as a quadratic form on the tangent space at critical points, but is not a well-defined quadratic form on the tangent space in general; this means that we don't have a coordinate-independent notion of second derivative. All of this is just the computations from the pre-manifold discussion, now placed in a more sophisticated context.
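The well-definedness of $df$ is a one-line computation: if $x = \phi(u)$ is a change of coordinates, then
$$\sum_j \frac{\partial (f \circ \phi)}{\partial u_j} \, du_j = \sum_{i,j} \frac{\partial f}{\partial x_i} \frac{\partial \phi_i}{\partial u_j} \, du_j = \sum_i \frac{\partial f}{\partial x_i} \, dx_i,$$
using $dx_i = \sum_j (\partial \phi_i / \partial u_j) \, du_j$; the two coordinate expressions for $df$ agree.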

If we can't take the second derivative of a function, can we at least take the first derivative of a $1$-form? In coordinates, if we have $\omega = \sum g_i d x_i$, we can form the matrix of partials $\tfrac{\partial g_i}{\partial x_j}$. Could this be coordinate independent in some sense?

A problem! If $\omega = df$, then $\tfrac{\partial g_i}{\partial x_j}$ is the Hessian, which we just saw behaves badly. Let's try skew-symmetrizing this matrix, giving $\tfrac{\partial g_i}{\partial x_j} - \tfrac{\partial g_j}{\partial x_i}$. This throws away the Hessian: if $\omega = df$, we just get $0$. Is what is left over any better?

A miracle (which the students are assigned to grind out by brute force): this gives a well-defined skew-symmetric form on the tangent space. Define $2$-forms on $\RR^n$, and explain that we have just constructed $d: \Omega^1 \to \Omega^2$ and shown that it is well defined. In particular, when $n=2$, we have just shown that the curl is a well-defined skew-symmetric form.
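For the record, here is the computation the students are grinding out. Write $\omega = \sum_i g_i \, dx_i = \sum_k h_k \, du_k$ in two coordinate systems, so $h_k = \sum_i g_i \, \partial x_i / \partial u_k$. Differentiating by the product rule and antisymmetrizing,
$$\frac{\partial h_k}{\partial u_l} - \frac{\partial h_l}{\partial u_k} = \sum_{i,j} \left( \frac{\partial g_i}{\partial x_j} - \frac{\partial g_j}{\partial x_i} \right) \frac{\partial x_i}{\partial u_k} \frac{\partial x_j}{\partial u_l},$$
since the terms $\sum_i g_i \, \partial^2 x_i / (\partial u_k \, \partial u_l)$ coming from the product rule are symmetric in $k$ and $l$ and cancel in the difference. This is exactly the transformation law of a bilinear form on the tangent space, which is why the skew-symmetrization is well defined even though the full matrix of partials is not.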