Understanding The Fundamental Theorem of Calculus, Part 2

calculus, derivatives, integration

I'm trying to understand the difference between the 1st and 2nd parts of the Fundamental Theorem of Calculus.
Let's start from the definitions:
First part says that if $f$ is continuous on $[a,b]$, then the function $g$ defined by $$g(x) = \int_a^x f(t)\, dt, \quad a \le x \le b$$
is continuous on $[a,b]$ and differentiable on $(a,b)$, and $$\frac{d}{dx}g(x) = f(x)$$

Second part says that:
If $f$ is continuous on $[a,b]$, then $$\int_a^b f(x)\, dx = F(b) - F(a)$$
where $F$ is any antiderivative of $f$, that is, a function $F$ such that $$\frac{d}{dx}F(x) = f(x)$$

So the part I don't understand is this: why does the first equation express the integral as a SINGLE antiderivative, while in the second equation we have a difference of two antiderivative values? What is the connection between $F(x)$ and $g(x)$?

Best Answer

I like to understand these theorems as kind of a 1-2 punch, where the first theorem sets things up, and the second theorem knocks them down (where "knocking things down" = "evaluating definite integrals".)

So the First Theorem defines a function $g(x)$ more or less explicitly: what's, say, $g(7)$? Well (assuming $7$ is between $a$ and $b$), it is $\int_a^7 f(t)\,dt$. Okay, how do you find that? Well, you've got to construct a bunch of Riemann sums, and then prove that they converge to a limit as the mesh gets smaller, and then that limit is the value of $g$ at $7$.
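
To see how much work that is, here's a minimal worked instance (the particular choices $f(t) = t^2$ and $a = 0$ are mine, purely for illustration). Evaluating $g(7)$ straight from the definition, with right-endpoint Riemann sums on $n$ equal subintervals, means computing

$$g(7) = \int_0^7 t^2\, dt = \lim_{n\to\infty} \sum_{i=1}^{n} \left(\frac{7i}{n}\right)^{2} \frac{7}{n} = \lim_{n\to\infty} \frac{343}{n^3}\cdot\frac{n(n+1)(2n+1)}{6} = \frac{343}{3},$$

which already needed the closed form for $\sum_{i=1}^n i^2$ and a limit, and that's just one value of $g$.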

$g$ is explicitly defined, but it's a real pain in the ass to evaluate $g$ at even one point - a Riemann sum and a limit each time. But the First Theorem does give us some information about how $g$ behaves, and that's going to help us in proving the Second Theorem. Also notice that one of the things that's true about $g$, which appears to be too obvious to mention, is that $g(a) = 0$.

In the Second Theorem, we have $F(x)$. How is $F$ defined in terms of $f$? It's not, at least not like $g$ was. It can be any wild-ass function, except it does have to pass a test: $F$'s derivative has to be equal to $f$ at each point in $[a,b]$. The Second Theorem tells us that if we have such an $F$, then (and here's where the sun breaks through the clouds and a chorus of angels starts singing) we can evaluate integrals of $f$ by just evaluating $F$ at two points and subtracting. And at this point we're absolute whizzes at finding the derivatives of functions.

This is amazing - no Riemann sums, no limits, just find an $F$ that passes the test, and you can evaluate definite integrals with two function evaluations and a subtraction. This is what is going to let us go forward and start actually evaluating integrals. If we didn't have the Second Theorem, our calculus class might end right here, with your prof saying "Well, that's how an integral is defined, but there aren't many we can evaluate, and it's a pain to try to evaluate new ones, as we have to look for neat, tricky patterns in the Riemann sums each time." Instead we just throw out a guess, check that it has the right derivative, and we are done.
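
Sticking with my illustrative example from above: to evaluate $\int_0^7 x^2\, dx$, guess $F(x) = \frac{x^3}{3}$, check that $F'(x) = x^2 = f(x)$ on $[0,7]$, and then

$$\int_0^7 x^2\, dx = F(7) - F(0) = \frac{343}{3} - 0 = \frac{343}{3},$$

the same number the Riemann-sum computation gave, with no sums and no limits anywhere.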

(If the Second Theorem is the meat of the matter, you might ask why we even bothered with the First Theorem. Well, it is used in proving the Second Theorem. As to why it's pulled out separately, and isn't just a step in the Second Theorem, I'm not sure, but it's pretty well established as a separate theorem by now, so it might just be historical tradition.)

As to your question about the relation between the $g$ of the First Theorem and the $F$ of the Second Theorem: Note that $F$ isn't uniquely defined; we know that we can add or subtract a constant to any existing $F$ that passes the 2nd thm's test, and get another function that also passes the same test. So the 2nd thm actually gives us a whole family of $F(x)$ functions that will work, and $g(x)$ is one of them. In fact, it is the unique one that has the value $0$ at $a$.

So if you look back at the equation in the 2nd thm, and imagine that you also know that for your $F$, $F(a) = 0$, notice that the equation

$$\int_a^b f(x) dx = F(b) - F(a)$$

becomes

$$\int_a^b f(x) dx = F(b) - 0$$

which is the same as the equation from the 1st thm

$$g(x) = \int_a^x f(t) dt$$

with just a bit of renaming and re-arranging.
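
In the same illustrative example: every antiderivative of $f(x) = x^2$ has the form $F(x) = \frac{x^3}{3} + C$, and the one member of that family with $F(0) = 0$ is exactly $g(x) = \int_0^x t^2\, dt = \frac{x^3}{3}$. Any other choice of $C$ works just as well in the 2nd thm, because the constant cancels in $F(b) - F(a)$.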


If you now understand this, and in particular understand the difference between

  • a definite integral, defined via limits of Riemann sums, and written $ \int_a^b f(t)\,dt $, and
  • an indefinite integral, defined by the "has the correct derivative" test, and written $ \int f(t) dt$,

then you might enjoy my favorite statement of the (two) Fundamental Theorems of Calculus, which is

$$ \int_a^b f(t) dt = \left. \int f(t)dt \, \right|_a^b$$

It looks like almost nothing, just shifting two variables over from a long 'S' onto a vertical line, but the notation hides, and encapsulates, all the details you've just worked through, and is incredibly powerful, as I hope I've convinced you.
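
Spelled out one last time in my running example, that single line reads

$$\int_0^7 t^2\, dt = \left. \int t^2\, dt \, \right|_0^7 = \left. \frac{t^3}{3} \right|_0^7 = \frac{343}{3}.$$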