From Single-Variable Integration to Multivariable Integration: What Happened to the “Problem” of “Signed/Net Area”

Tags: definite integrals, integration, multivariable-calculus, real-analysis, vector analysis

I remember that, when learning single-variable integration, we learned that if a function $f(x)$ is negative at a point, then the corresponding Riemann-sum term $f(c_k)\,\Delta x_k$ is the negative of the rectangle's area. This means that, unless the graph of $y = f(x)$ lies entirely above the $x$-axis, the value calculated by the definite integral is the signed or net area rather than the total area: the integral measures the region between $y = f(x)$ and the $x$-axis, counting the parts where $y = f(x)$ is below the $x$-axis as negative and the parts where it is above as positive, so some cancellation occurs between them; thus, we get the signed or net area.
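
For instance, take $f(x) = \sin x$ on $[0, 2\pi]$: the definite integral is $\int_0^{2\pi} \sin x \, dx = 0$, because the positive contribution on $[0, \pi]$ exactly cancels the negative contribution on $[\pi, 2\pi]$, even though the total area between the curve and the $x$-axis is $4$.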

Area is always a nonnegative quantity. The Riemann sum approximations contain terms such as $f(c_k) \Delta x_k$ that give the area of a rectangle when $f(c_k)$ is positive. When $f(c_k)$ is negative, then the product $f(c_k) \Delta x_k$ is the negative of the rectangle’s area. When we add up such terms for a negative function, we get the negative of the area between the curve and the x-axis. If we then take the absolute value, we obtain the correct positive area.

(Hass 285)

Hass, Joel R., Christopher Heil, Maurice Weir. Thomas' Calculus, 14th Edition. Pearson.

If we wanted to find the total area, then we would have to break up the interval based on where $y = f(x)$ is below or above the $x$-axis, compute a separate definite integral over each resulting subinterval, and take the absolute value of the integrals over the subintervals where the graph lies below the $x$-axis:

To compute the area of the region bounded by the graph of a function $y = f(x)$ and the x-axis when the function takes on both positive and negative values, we must be careful to break up the interval $[a, b]$ into subintervals on which the function doesn’t change sign. Otherwise we might get cancelation between positive and negative signed areas, leading to an incorrect total. The correct total area is obtained by adding the absolute value of the definite integral over each subinterval where $f(x)$ does not change sign. The term “area” will be taken to mean this total area.

(Hass 285)

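A minimal numerical check of this procedure, as a sketch assuming SciPy is available (the function $\sin x$ and the interval $[0, 2\pi]$ are just illustrative choices, not from the text):

```python
import numpy as np
from scipy.integrate import quad

f = np.sin  # changes sign once on [0, 2*pi], at x = pi

# Signed (net) area: the positive and negative parts cancel.
signed, _ = quad(f, 0, 2 * np.pi)

# Total area: split at the zero crossing and add absolute values,
# exactly as the quoted passage prescribes.
total = abs(quad(f, 0, np.pi)[0]) + abs(quad(f, np.pi, 2 * np.pi)[0])

# Equivalently, integrate |f| over the whole interval.
total_abs, _ = quad(lambda x: abs(f(x)), 0, 2 * np.pi, points=[np.pi])

print(signed)     # ~0.0
print(total)      # ~4.0
print(total_abs)  # ~4.0
```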

I then went on to learn multivariable integrals (double, triple) and related concepts, such as different parameterisations/transformations (cylindrical coordinates, spherical coordinates), Green's theorem, Stokes' theorem, the Divergence theorem, etc.

In learning these more advanced concepts, this issue with negative function values was never mentioned again. However, it has been bugging me for a while now, since, as I understand it, the same problem should still arise in the multivariable case; yet, unlike in the single-variable case, there was no discussion of it or of "how to deal with it", as there was for signed/net area.

I would greatly appreciate it if people could please take the time to explain how the aforementioned "problem" of negative function values in the case of single-variable integration comes into play when we're dealing with these more advanced concepts and multivariable integration.

Best Answer

The reason is primarily one of focus: you can still use integrals to calculate areas and volumes, and the same situation arises, in that $\int f$ produces a number that counts area below the axis as negative (as it must, since the integral is linear: $\int (-f) = -\int f$). But:

  1. Most of the time in higher dimensions, things are set up so that if you are computing a volume, it's a relatively simple shape where the areas are computed by some slicing technique to turn it into an iterated integral. Unlike the one-dimensional case, which graph is "on top" normally doesn't change in these examples (and if it did, you'd have to apply the same procedure to split into cases); it is this question of "on-topness" that gives "signed" area in the first place, and why $\int \lvert f \rvert$ gives the actual area between $y=f(x)$ and $y=0$: it forces $f$ to always be the top function, if you will.

  2. Even more frequently, integrals are not used for computing volumes, but other quantities like averages and force and work and flux, all of which want to use integration as a linear operation rather than just an area-measuring gadget. The linearity is almost universally regarded as more essential to the definition than the area-measuring (one can find mathematicians raving about how much they love expectation being linear, which is an example of this phenomenon), so $\int f$ is much more important mathematically than $\int \lvert f \rvert$. Of course, $\int \lvert f \rvert$ still measures the (unsigned) area under a curve, but this is a tiny corner of the integration universe. As you move into more advanced areas, you simply don't calculate volumes as often as you use the other applications of the integral.
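
To make the contrast in point 1 concrete: the signed/unsigned distinction carries over verbatim to double integrals; it simply isn't emphasized because one is rarely computing a volume when the integrand changes sign. A small numerical illustration, again assuming SciPy (the integrand $f(x, y) = x$ over $[-1, 1] \times [0, 1]$ is just an illustrative choice):

```python
from scipy.integrate import dblquad

# f(x, y) = x over the rectangle [-1, 1] x [0, 1].
# dblquad integrates func(y, x) with x in [a, b] and y in [gfun(x), hfun(x)].
signed, _ = dblquad(lambda y, x: x, -1, 1, lambda x: 0, lambda x: 1)
unsigned, _ = dblquad(lambda y, x: abs(x), -1, 1, lambda x: 0, lambda x: 1)

print(signed)    # ~0.0  net "signed volume": the two halves cancel
print(unsigned)  # ~1.0  actual volume between the graph and the plane z = 0
```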

Of course, in higher dimensions you also have the problem of orientation in the differential: if you put your integration variables in a different order when changing variables, the sign of the Jacobian determinant has to be corrected (you may have been told to simply take its absolute value: while this works, it hides why the problem occurs in the first place). Such sign issues are best explained using differential forms, which are essentially constructed to be "things you can integrate", but have this funny orientation property built into their definition (it also allows one to stop worrying about finding outward-pointing normals all the time). The one-dimensional analogue of this is the convention that $\int_a^b f = -\int_b^a f$.
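
A quick symbolic check of that sign behaviour, as a sketch assuming SymPy, using the familiar polar-coordinate map $(x, y) = (r\cos\theta, r\sin\theta)$:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = r * sp.cos(theta)
y = r * sp.sin(theta)

# Jacobian determinant with the variables ordered (r, theta) ...
J_rt = sp.simplify(sp.Matrix([x, y]).jacobian([r, theta]).det())

# ... and with the order swapped to (theta, r): the sign flips.
J_tr = sp.simplify(sp.Matrix([x, y]).jacobian([theta, r]).det())

print(J_rt)  # r
print(J_tr)  # -r
```

In the change-of-variables formula one integrates against $\lvert J \rvert$, which is why "just take the absolute value" works; differential forms instead keep track of the sign through orientation.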
