Let $f:(a,b) \to \mathbb R$ be differentiable such that $f'(x) \ge 0$ on $(a,b)$ and $f'(x) = 0$ at only one point. Prove that $f$ is strictly increasing.

calculus, continuity, derivatives, real-analysis, solution-verification

I actually looked pretty hard for this proof and couldn't find it. I also feel like my proof isn't very good: I lay a lot of basic groundwork, but the logic and the deduction are far from perfect, so I'd love some feedback.

Here it goes:

We have $f'(x) \ge 0$ for all $x \in (a,b)$, and $f'(x) = 0$ at only one point.

Assume $f'(x_0) = 0$ for some $x_0 \in (a,b)$.

Let us mark $I_1 = (a,x_0)$ and $I_2 = (x_0,b)$. We have $f'(x) \gt 0$ for all $x \in I_1 \cup I_2$, and by a theorem we saw in the lecture this implies $f$ is injective on $I_1$ and on $I_2$.
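(For completeness, this follows from the mean value theorem: for $u < v$ both in $I_1$, or both in $I_2$, there is some $\xi \in (u,v)$ with

$$f(v) - f(u) = f'(\xi)\,(v - u) > 0,$$

so $f$ is injective on each of the two intervals.)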

Since $f$ is differentiable on $(a,b)$, it is continuous on $(a,b)$.

We know that the image of an interval under a continuous function is again an interval, so let us mark $f(I_1) = I_3$ and $f(I_2) = I_4$.

This is the part where I feel the proof is losing ground.

From the continuity of $f$ we have $f(x_0) \notin I_3 \cup I_4$.

Notice that for all $x \in I_1 \cup I_2$ we have $f'(x) \gt 0$, which implies $f$ is strictly increasing on $I_1$ and on $I_2$.

Also we have $\lim_{x \to x_0^{+}}f(x) = \lim_{x \to x_0^{-}}f(x) = f(x_0)$, and so for all $x_1 \in I_1$ and $x_2 \in I_2$ we have $f(x_1) \neq f(x_2)$.
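In more detail (this is the step I'm least sure of), the idea is to use continuity at $x_0$: given $x_1 \in I_1$ and $x_2 \in I_2$, pick $t \in (x_1, x_0)$ and $s \in (x_0, x_2)$; then

$$f(x_1) < f(t) \le \lim_{u \to x_0^{-}} f(u) = f(x_0) = \lim_{u \to x_0^{+}} f(u) \le f(s) < f(x_2),$$

so in fact $f(x_1) < f(x_2)$.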

All in all we can deduce that $f$ is injective on $(a,b)$, which implies $f$ is strictly increasing on $(a,b)$, as needed.

Best Answer

The standard way to prove theorems like this is by the mean value theorem, with an argument along the following lines. First, recall that $f$ is strictly increasing on $(a,b)$ precisely when, for all $x, y \in (a,b)$ with $x < y$, we have $f(x) < f(y)$. So fix $x, y \in (a,b)$ with $x < y$, and consider the interval $[x,y]$.

Of course, $f$ is continuous on $[x,y]$ and differentiable on $(x,y)$, so by the mean value theorem there exists some point $c \in (x,y)$ such that $f'(c) = \frac{f(y) - f(x)}{y - x}$. If $f'(c) > 0$, it follows that $f(y) - f(x) > 0$, i.e. $f(y) > f(x)$. The only remaining case is that $c$ is the single point at which the derivative is zero. In that case, split the interval into $[x, c]$ and $[c, y]$: the derivative is strictly positive on the corresponding open intervals $(x,c)$ and $(c,y)$, so applying the mean value theorem to each subinterval gives $f(c) > f(x)$ and $f(y) > f(c)$, and thus $f(y) > f(x)$, the desired result.
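Spelled out, with $c_1 \in (x,c)$ and $c_2 \in (c,y)$ denoting the points supplied by the mean value theorem on the two subintervals:

$$f(c) - f(x) = f'(c_1)\,(c - x) > 0, \qquad f(y) - f(c) = f'(c_2)\,(y - c) > 0,$$

and adding the two inequalities gives $f(y) > f(x)$.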

I haven't read over the entirety of your proof, but it seems to invoke more machinery than is needed for a result of this kind.