Prove that there is at most one root between two stationary points

calculus, real-analysis, solution-verification

I am doing some past papers for an upcoming exam and came across a more specific version of this:

Let $f$ be a real and differentiable function which is continuous on the interval $[A,B]$. Prove that if there are two stationary points at $x=a$ and $x=b$ where $A<a<b<B$, there is at most one root in the interval $[a,b]$.

I came up with this proof, but it's different from the one in the marking instructions. Is my proof still correct?

Proof: There are two cases: either the sign of $f$ is different at $a$ and $b$, or the sign is the same (note that $f$ cannot take the same value at two consecutive stationary points, so in particular $f$ cannot be $0$ at both $a$ and $b$).

If the sign is the same, then as $x$ moves from $a$ to $b$, $f$ does not cross the $x$-axis, and therefore has no roots there. If the sign is not the same, then $f$ must cross the $x$-axis somewhere between $a$ and $b$. It only does this once, otherwise there would be a stationary point between $a$ and $b$, a contradiction.

This is the proof in the marking instructions for the specific function $f(x) = x^3 - 3x + c$, where $c$ is an arbitrary constant, and you are asked to prove that there is at most one root in the interval $[0,1]$:

If there were more than one root in the interval $[0, 1]$, then by Rolle's theorem we would find a critical point of the function $f(x) = x^3 - 3x + c$ in between them (and hence necessarily in the interior of the interval, i.e. $(0, 1)$). But $f'(x) = 3x^2 - 3 = 3(x^2 - 1) = 3(x - 1)(x + 1)$, which only has roots at $-1$ and $1$. So there is no critical point of $f(x)$ in $(0, 1)$, and hence at most one root of $f(x)$ in $[0, 1]$.
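If you want a quick sanity check of this argument, here is a small SymPy sketch (my own addition, not part of the marking instructions; the value of $c$ below is an arbitrary sample, whereas the argument above covers every $c$):

```python
# Sanity check: f'(x) = 3x^2 - 3 has no root in (0, 1), so
# f(x) = x^3 - 3x + c has at most one root in [0, 1].
import sympy as sp

x = sp.symbols('x', real=True)
c = sp.Rational(1, 2)      # an arbitrary sample value of the constant c
f = x**3 - 3*x + c
fprime = sp.diff(f, x)     # 3*x**2 - 3 = 3*(x - 1)*(x + 1)

crit = sp.solve(fprime, x)
print("critical points of f:", crit)                                 # [-1, 1]
print("critical points in (0, 1):", [r for r in crit if 0 < r < 1])  # []

# Real roots of f that land in [0, 1]; there should be at most one.
roots_in_01 = [r for r in sp.real_roots(f) if 0 <= float(r) <= 1]
print("roots of f in [0, 1]:", [float(r) for r in roots_in_01])
```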

Best Answer

Well, let's try to interpret what the problem wants us to do, and then we'll measure your attempt against that interpretation. I am going to take this very slowly and write in a narrative format, so read it in a slow, careful, and chronological manner to follow what I am trying to say.

Fixing and interpreting the problem statement

Let $f$ be a real and differentiable function which is continuous on the interval $[A,B]$. Prove that if there are two stationary points at $x=a$ and $x=b$ where $A<a<b<B$, there is at most one root in the interval $[a,b]$.

It turns out that the way the problem is framed is very confusing and misleading. The problem tells us that $f$ is real and differentiable, and then tells us that it is continuous on the interval $[A,B]$ (which is automatic, since differentiability implies continuity). Framed this way, the continuity hypothesis is redundant. Here's a better way to frame the problem, which asks the same thing:

Let $f$ be a continuous real function, which is differentiable over the domain $[A,B].$ Prove that if there are two stationary points at $x=a$ and $x=b$ where $A<a<b<B$, there is at most one root in the interval $[a,b]$.

In this statement the boundary points $A$ and $B$ are not redundant, because the function is specifically assumed to be differentiable only over the interval $[A,B].$ Now, given that there are two stationary points $a$ and $b$ in the interval $[A,B],$ we need to show that there is at most one root of the function $f$ in $[a,b].$ So this is what I can first make out of the given question.

Trying to understand your attempt

Now, in order to solve this, you have used casework on the sign (positive or negative) of the values of $f$ at $a$ and $b.$ That's a good move to make, as it immediately helps us keep track of which side of the $x$-axis $f$ is on as the input increases. However, I can see that you have made the assumption that there are no other stationary points of the function $f$ strictly between $a$ and $b.$

Have you made this assumption knowingly?

You might have, because the claim as stated in the problem is in fact not true. Consider the function $f(x)=\sin(x)$, with the stationary points $a=-\frac{9\pi}{2}$ and $b=\frac{9\pi}{2}$ (these really are stationary points, since $f'(x)=\cos(x)$ vanishes at odd multiples of $\frac{\pi}{2}$). There are many roots of $f$ between $a$ and $b$; among them $$f(-4\pi)=0$$ $$f(-2\pi)=0$$ $$f(0)=0$$ $$f(2\pi)=0$$ $$f(4\pi)=0,$$ and in fact every integer multiple of $\pi$ in between is a root.
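If you want to double-check this numerically, here is a small NumPy sketch (again my own addition):

```python
# Check: a = -9*pi/2 and b = 9*pi/2 are stationary points of sin
# (cos vanishes there), and every multiple of pi between them is a root.
import numpy as np

a, b = -9 * np.pi / 2, 9 * np.pi / 2
print(np.isclose(np.cos(a), 0.0), np.isclose(np.cos(b), 0.0))  # True True

ks = np.arange(-4, 5)                        # k = -4, ..., 4
print(np.allclose(np.sin(ks * np.pi), 0.0))  # True: sin(k*pi) = 0 for each k
```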

This happens because there are many stationary points inside the chosen interval. Notice that the official solution says "there is no critical point of $f(x)$ in $(0,1).$" That is true for the specific polynomial in the original question, but it need not be true for a general function $f$. So we can modify the problem a little to make the claim correct (this is the final modification):

Let $f$ be a continuous real function, which is differentiable over the domain $[A,B].$ Suppose that there are only two stationary points at $x=a$ and $x=b$ in the interval $[A,B],$ where $A<a<b<B.$ Then, prove that there is at most one root of $f$ in the interval $[a,b]$.

Now, with this version of the problem, your proof actually works. However, it is written very tersely and skips a few steps. For example, you have said: "If the sign is the same, then as $x$ moves from $a$ to $b$, $f$ does not cross the $x$-axis, and therefore has no roots there."

How exactly do you know that? If $f$ has the same sign at both endpoints, then the two values lie on the same side of $0.$ But it is still conceivable that $f$ changes sign somewhere in the middle and then changes back before reaching $b;$ in that case we would get at least two roots. You need to rule this out, for instance by using Rolle's theorem as in the official solution.
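To spell the missing step out (this is just the Rolle argument from the official solution, transplanted to your setting): suppose $f$ had two roots $r_1<r_2$ with $a\le r_1<r_2\le b.$ Since $f(r_1)=f(r_2)=0,$ Rolle's theorem gives some $c\in(r_1,r_2)\subseteq(a,b)$ with $$f'(c)=0,$$ that is, a stationary point strictly between $a$ and $b,$ contradicting the assumption that $a$ and $b$ are the only stationary points in $[A,B].$ Note that this one argument disposes of both of your cases at once.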

Also, in the second case, you have directly stated:

"It only does this once, otherwise there would be a stationary point between $a$ and $b$."

Notice how both of your arguments essentially boil down to the official solution's use of Rolle's theorem? This means that your solution is basically the same as the official one; it just splits into two cases and applies Rolle's theorem separately in each.

This is a very common phenomenon: we solve a problem in our own unique way, only to discover later that it boils down to essentially the official solution.

As a side note, did you notice that a root is basically an intersection of the graph with the line $y=0$? It turns out that the same phenomenon holds for any line at all (hint: it will use the Mean Value Theorem)! Try to see how that may work out.
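In case you want a nudge (a sketch only, with $m$ and $k$ denoting the slope and intercept of an arbitrary line): the intersections of the graph of $f$ with the line $y=mx+k$ are exactly the roots of $$g(x)=f(x)-(mx+k),$$ and $g$ is differentiable wherever $f$ is. If the graph met the line twice, say at $r_1<r_2,$ then Rolle's theorem applied to $g$ would give $c\in(r_1,r_2)$ with $g'(c)=0,$ i.e. $f'(c)=m.$ So between two points where $f'=m,$ with no others in between, the graph meets any line of slope $m$ at most once; this is the Mean Value Theorem in disguise.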

I hope this helped :)