Confusion on Rudin’s Principles of Mathematical Analysis, Ex 4.1

real-analysis

I am trying to solve the following exercise in Rudin's "Principles of Mathematical Analysis" (Ex 4.1):

Suppose $f$ is a real function defined on $R^1$ which satisfies
$$\lim_{h\to 0}[f(x+h)-f(x-h)]=0$$
for every $x\in R^1$. Does this imply that $f$ is continuous?

The answer to this question is simply no, and it can be shown by using the function $f(x) = 1$ if $x\in \mathbb{Z}$ and $f(x) = 0$ otherwise.
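As a quick numerical sanity check of this counterexample (a sketch, not part of the proof; `f` below is the indicator function of $\mathbb{Z}$):

```python
# Counterexample: f is the indicator function of the integers.
def f(x):
    return 1 if float(x).is_integer() else 0

# The symmetric difference f(x+h) - f(x-h) vanishes for small h,
# e.g. at x = 0, where both x+h and x-h are non-integers:
for h in [0.5, 0.1, 0.01, 0.001]:
    assert f(0 + h) - f(0 - h) == 0

# Yet f is discontinuous at 0: f(h) stays 0 while f(0) = 1.
print(f(0), f(0.001))  # prints: 1 0
```

The same vanishing holds at every $x$: for $h$ small enough, either both $x\pm h$ are non-integers, or (when $x\in\mathbb{Z}$ and $h\notin\mathbb{Z}$) both values are $0$.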

However, I am a little bit confused by this result, since I obtain the contrary by using the definitions of limits and continuity. In particular, I have the following derivation:

Define the function $$g(h) := f(x+h) - f(x-h)$$ for a fixed $x$. Then the hypothesis implies that $$\lim_{h\to 0}g(h)=0.$$ By the definition of the limit, we have that $\forall \varepsilon >0$, $\exists \delta>0$, such that $|g(h)| < \varepsilon$ whenever $0 < |h| < \delta/2$.

Now let $p, q \in \mathbb{R}$. Assume $p < q$, without loss of generality. Define $h := \frac{q-p}{2}$, $x := \frac{q+p}{2}$. Fix $\varepsilon >0$. Then, whenever
$$|q-p| = |(x+h)-(x-h)| = 2|h| < \delta,$$
we have
$$|f(q)- f(p)| = |f(x+h)-f(x-h)| < \varepsilon.$$ Hence, $f$ is continuous.

My question is: in which part of this derivation am I making a mistake?

Please note that there have been other questions regarding this exercise, for instance see this. However, I have a different concern, whose solution I think could be useful for other people.

Best Answer

The error lies in “Hence, $f$ is continuous.” There is nothing before that assertion that justifies it. Proving that $f$ is continuous means proving that, for every $x\in\mathbb R$ and every $\varepsilon>0$, there is a $\delta>0$ such that $\lvert h\rvert<\delta\implies\bigl\lvert f(x+h)-f(x)\bigr\rvert<\varepsilon$, and you did not prove that. Your argument only ever compares the values of $f$ at two points placed symmetrically about $x$; it never compares $f(x+h)$ with $f(x)$ itself, which is what continuity at $x$ requires.
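To see concretely why the symmetric limit says nothing about $f(x+h)-f(x)$, take the counterexample at $x=0$ (a sketch; `f` is again the indicator function of $\mathbb{Z}$, not part of the answer itself):

```python
def f(x):
    return 1 if float(x).is_integer() else 0

x = 0
for h in [0.1, 0.01, 0.001]:
    # The symmetric difference vanishes for every small h != 0 ...
    assert f(x + h) - f(x - h) == 0
    # ... but the difference that continuity at x actually needs does not:
    assert abs(f(x + h) - f(x)) == 1
```

Both assertions pass: the hypothesis of the exercise holds at $x=0$, while continuity at $x=0$ fails.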
