Last step in the proof of second derivative test

Tags: continuity, derivatives, real-analysis

I came across a problem while studying the proof of the Second Derivative Test theorem from Spivak's Calculus (Chapter 11, Theorem 5, p. 199, 3rd edition):

Suppose $f'(a) = 0$. If $f''(a) > 0$, then $f$ has a local minimum at $a$.

I'm able to follow the proof up to the very last point, where, after observing the sign of $f'$ around $a$, the author concludes that since

$f$ is increasing in some interval to the right of $a$ and decreasing in some interval to the left of $a$, […] $f$ has a local minimum at $a$.

The way I understood this is that $f$ decreases on $(a-h, a)$ and increases on $(a, a+h)$ for some $h > 0$, but nothing is said about the point $a$ itself. I managed to come up with two proofs, neither of which seems straightforward enough to be omitted from the book, so I have the impression that I'm missing something.

  1. For any $b \in (a-h, a)$ consider $c = \frac{b + a}{2}$. Since $f$ is decreasing there, $\epsilon = \frac{f(b) - f(c)}{2} > 0$. From the (left-)continuity of
    $f$ at $a$, for this $\epsilon$ there is some $\delta > 0$ such that $\forall x : 0 \le a - x < \delta : \left| f(x) - f(a) \right| < \epsilon \Leftrightarrow f(x) - \epsilon < f(a) < f(x) + \epsilon$.
    For any $x$ that satisfies both $0 \le a - x < \delta$ and $c < x$, the previous inequality together with the fact that $f$ is decreasing gives: $f(a) < f(x) + \epsilon < f(c) + \epsilon = \frac{f(b) + f(c)}{2} < f(b)$.

    The proof that $f(a) < f(b)$ for all $b \in (a, a+h)$ is similar.

  2. I know that if $f$ were continuous on some interval $(a-\delta, a+\delta)$, then there would be a closed interval inside this open interval (e.g. $[a-\frac{\delta}{2}, a+\frac{\delta}{2}]$) on which $f$ takes on its minimum value at some point $b$. By way of contradiction we can see that $b = a$: otherwise $\frac{a+b}{2}$ lies strictly between $b$ and $a$, and $f(\frac{a + b}{2}) < f(b)$ would contradict the conclusion that $b$ is a minimum point.
    Since $\lim_{h\to 0} \frac{f'(a + h) - f'(a)}{h} = f''(a)$ exists, $f'(x)$ must exist $\forall x \in (a-\delta, a+\delta)$ for some small $\delta > 0$; thus $f$ is continuous on $(a-\delta, a+\delta)$, and the first part of this proof applies.
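As a sanity check on a concrete example (my own illustration, not from Spivak): take $f(x) = x^2 + x^3$, so that $f'(0) = 0$ and $f''(0) = 2 > 0$. The expected sign pattern of $f'$ around $0$, and the conclusion that $0$ is a local minimum point, can then be verified numerically (an illustration only, not a proof):

```python
# Sanity check of the second derivative test on f(x) = x^2 + x^3:
# f'(x) = 2x + 3x^2, so f'(0) = 0 and f''(0) = 2 > 0.
def f(x):
    return x**2 + x**3

def fprime(x):
    return 2*x + 3*x**2

h = 0.1  # a small neighbourhood of a = 0
xs = [k * h / 100 for k in range(1, 101)]  # sample points in (0, h]

# f' is negative just left of 0 and positive just right of 0 ...
assert all(fprime(-x) < 0 for x in xs)
assert all(fprime(x) > 0 for x in xs)

# ... and f(0) is the smallest sampled value in the neighbourhood.
assert all(f(0) < f(x) and f(0) < f(-x) for x in xs)
```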

My questions:

  1. Are these proofs correct?
  2. Is there a more direct way to conclude from the last part of the proof provided in the book that $\operatorname{f}$ has a local minimum at $a$?

Best Answer

Bravo on worrying about the details here.

Your proofs seem correct, though for the second one it might be better to treat the cases $b < a$ and $b > a$ separately.

Here are some equivalent arguments that you might find more intuitive:

We actually know that $f$ is decreasing on some interval $(a-\delta, a]$ to the left of $a$, with the endpoint $a$ included, though Spivak doesn't say so explicitly.

So, for all $x$ in the open interval $(a-\delta, a)$, $f(x) > f(a)$.

Why?

Suppose instead $f(x_0) \leq f(a)$ for some $x_0$ in this interval.

Since $f$ is decreasing, there must be some $x_1$ in $(x_0, a)$ such that $f(x_1) < f(x_0) \leq f(a)$.

But then continuity of $f$ and the IVT tell us there must be some $x_2$ with $$x_1 < x_2 < a \text{ and } f(x_1) < f(x_2) < f(a),$$ which contradicts $f$ being decreasing to the left of $a$. (For example, there must be some $x_2$ such that $f(x_2) = \frac{f(x_1) + f(a)}{2}$.)

As such, we can safely say that $f$ takes on its minimum value on $(a-\delta, a]$ at $a$.

Similar arguments can apply to the right side interval.

Edit: We can generalize this idea as follows:

Theorem: If $f$ is continuous on $[a,b]$ and decreasing on $(a,b)$ then $f$ takes on its minimum value on $[a,b]$ at $b$ (and its maximum value at $a$).

The proof is very similar to what we did above. Furthermore, a nearly identical theorem applies to $f$ increasing.

Note: this result doesn't depend on $f$ being differentiable.
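The theorem can be illustrated numerically with an arbitrarily chosen $f$ (this particular $f$ happens to be differentiable, though the theorem doesn't require it):

```python
# Illustration: f(x) = -x^3 is continuous on [-1, 1] and decreasing on (-1, 1),
# so by the theorem its minimum on [-1, 1] is at b = 1 and its maximum at a = -1.
def f(x):
    return -x**3

a, b, n = -1.0, 1.0, 2001
xs = [a + (b - a) * k / (n - 1) for k in range(n)]
values = [f(x) for x in xs]

assert min(values) == f(b)  # minimum attained at the right endpoint
assert max(values) == f(a)  # maximum attained at the left endpoint
```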

Edit: Another approach, similar to your second proof: we know there is some interval $[a-\delta, a+\delta]$ with $f'(x) < 0$ for all $x$ in $[a-\delta, a)$, and $f'(x) > 0$ for all $x$ in $(a, a+\delta]$.

We know $f$ takes on a minimum value at some $y$ in $[a-\delta, a+\delta]$.

$f'(a-\delta) < 0$ implies that $f(a-\delta) > f(x)$ for all $x$ just to the right of $a-\delta$.

Similar considerations apply to the right endpoint, and taken together we see that neither endpoint can be the minimum of $f$ on $[a-\delta, a+\delta]$. Thus, $f$ must have a local minimum point somewhere in $(a-\delta, a+\delta)$.

Since $f$ is differentiable and has a local minimum somewhere in $(a-\delta, a+\delta)$, $f' = 0$ at this minimum. But $a$ is the only point with zero derivative on $(a-\delta, a+\delta)$, so $a$ must be the minimum point.
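This chain of reasoning can be checked numerically on a toy example; the choices $f(x) = x^2$, $a = 0$, $\delta = 1$ below are arbitrary, and a sampled minimum is of course only a sketch of the argument, not a proof:

```python
# f(x) = x^2 with a = 0, delta = 1: f'(x) = 2x is negative on [-1, 0),
# positive on (0, 1], and zero only at x = 0.
def f(x):
    return x * x

def fprime(x):
    return 2 * x

delta = 1.0
xs = [-delta + 2 * delta * k / 2000 for k in range(2001)]

# Neither endpoint can be the minimizer: f decreases just right of
# -delta and increases just left of +delta.
assert fprime(-delta) < 0 and fprime(delta) > 0

# The minimum over the sampled interval is attained at the unique
# zero of f', namely x = 0.
minimizer = min(xs, key=f)
assert minimizer == 0.0 and fprime(minimizer) == 0.0
```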

Edit 2: A third take, which is a slight modification of Spivak's Corollary 3:

Theorem: If $f$ is continuous on $[a,b]$ with $f'(x) < 0$ for all $x$ in $(a,b)$, $f$ is [strictly] decreasing on $[a,b]$.

Proof:

If $a \leq x < y \leq b$, from the MVT we have $$ \frac{f(y) - f(x)}{y-x} = f'(x_0) \text{ for some $x_0$ in $(x,y) \subseteq (a,b)$},$$

but $f'(x_0) < 0$ so we must have $f(y) < f(x)$.

Notice that the above theorem applies even at the endpoints $a$ and $b$, although we know nothing about the value of $f'$ at those points.

We can apply this theorem (and a nearly identical version for $f' > 0$) to the second derivative scenario to see that $a$ must be a minimum point.
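As a final illustration (my own example), $f(x) = \cos x$ on $[0, \pi]$ shows why the endpoint remark matters: $f'(x) = -\sin x < 0$ on the open interval, while $f'$ vanishes at both endpoints, yet the theorem still gives strict decrease on the whole closed interval:

```python
import math

# f(x) = cos(x) on [0, pi]: f'(x) = -sin(x) < 0 on (0, pi),
# while f'(0) = f'(pi) = 0.  The theorem nevertheless gives strict
# decrease on the closed interval, endpoints included.
a, b, n = 0.0, math.pi, 1001
xs = [a + (b - a) * k / (n - 1) for k in range(n)]
values = [math.cos(x) for x in xs]

# strictly decreasing across every consecutive pair of sample points
assert all(v1 > v2 for v1, v2 in zip(values, values[1:]))
# derivative -sin(x) vanishes at both endpoints (up to rounding)
assert math.sin(a) == 0.0 and abs(math.sin(b)) < 1e-12
```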