Uniform Continuity – Why $\delta$ Can't Depend on $x$ in the Definition

continuity, definition, real-analysis

I'm told that a function defined on an interval $[a,b]$ or $(a,b)$ is uniformly continuous if for each $\epsilon\gt 0$ there exists a $\delta\gt 0$ such that $|x-t|\lt \delta$ implies $|f(x)-f(t)|\lt \epsilon$ for all $x$ and $t$ in the interval. The text then adds a note saying that $\delta$ cannot depend on $x$; it can only depend on $\epsilon$.

With ordinary continuity, the $\delta$ can depend on both $x$ and $\epsilon$. I'm just a little lost on why $|x-t|\lt \delta$ should imply $|f(x)-f(t)|\lt \epsilon$, and on how $\delta$ can avoid depending on $x$ and depend only on $\epsilon$.
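
In symbols, if I'm reading the definitions correctly, the only difference between the two is where the $\delta$ quantifier sits:

$$\text{continuity:} \quad \forall \epsilon \gt 0 \ \forall x \ \exists \delta \gt 0 \ \forall t : |x-t| \lt \delta \implies |f(x)-f(t)| \lt \epsilon,$$

$$\text{uniform continuity:} \quad \forall \epsilon \gt 0 \ \exists \delta \gt 0 \ \forall x \ \forall t : |x-t| \lt \delta \implies |f(x)-f(t)| \lt \epsilon.$$

Since $\exists \delta$ now comes before $\forall x$, one $\delta$ has to work at every point simultaneously, which I take to be what "$\delta$ cannot depend on $x$" means. What I'm missing is the intuition for when such a single $\delta$ can or cannot exist.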

Best Answer

Here's a picture that might help. A visual way of understanding $\delta$-$\epsilon$ arguments is to start with a $\delta$-sized interval in the domain, project up to the graph of the function, and then back onto an $\epsilon$-sized interval in the range.

[Figure: $\delta$-sized intervals in the domain projected through the graphs of $f(x) = x$ and $f(x) = x^2$ onto $\epsilon$-sized intervals in the range]

With the function $f(x) = x$, there is a bounded ratio between the size of the interval I feed in and the size of the interval I get out. Not so with $f(x) = x^2$! Look how I feed in small intervals and get out large intervals for large values of $x$. This is why we say that $f(x) = x$ is uniformly continuous, but $f(x) = x^2$ is not uniformly continuous on $\mathbb{R}$: there is no way to control the size of the image of $f(x) = x^2$ globally (i.e. independently of $x$) by controlling the size of the input interval.
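
To back the picture with a little algebra: for $f(x) = x^2$,

$$|f(x) - f(t)| = |x - t|\,|x + t|,$$

so if $|x - t| \lt \delta \le 1$ then $|x + t| \le 2|x| + 1$, and the pointwise choice $\delta = \min\!\left(1, \frac{\epsilon}{2|x|+1}\right)$ works, but it shrinks to $0$ as $|x| \to \infty$, so no single $\delta$ can serve every $x$. For $f(x) = x$, by contrast, $|f(x) - f(t)| = |x - t|$, and $\delta = \epsilon$ works everywhere, independently of $x$.

If you want to see numbers instead of pictures, here is a minimal Python sketch (my own illustration, nothing canonical) that estimates, for a fixed $\epsilon = 0.1$, the largest $\delta$ that works at each point $x$:

```python
def largest_delta(f, x, eps, steps=50):
    """Bisect for (roughly) the largest delta such that moving at most
    delta away from x changes f by less than eps. Only the endpoints
    x - d and x + d are checked, which is fine here because both example
    functions are monotone on the small intervals that end up mattering."""
    lo, hi = 0.0, 10.0
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if abs(f(x + mid) - f(x)) < eps and abs(f(x - mid) - f(x)) < eps:
            lo = mid   # mid still works: the true cutoff is larger
        else:
            hi = mid   # mid fails: the true cutoff is smaller
    return lo

eps = 0.1
for x in [1.0, 10.0, 100.0, 1000.0]:
    d_lin = largest_delta(lambda t: t, x, eps)
    d_sq = largest_delta(lambda t: t * t, x, eps)
    print(f"x = {x:6.0f}   f(x)=x: {d_lin:.6f}   f(x)=x^2: {d_sq:.8f}")
```

For $f(x) = x$ the printed $\delta$ stays at $0.1$ no matter how large $x$ is; for $f(x) = x^2$ it shrinks roughly like $\epsilon/(2x)$, which is exactly the squeezing the picture shows.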
