Is the complex function $f(z)=\exp(-|z|)$ uniformly continuous on $\mathbb{C}$?

complex-analysis, continuity, exponential-function, uniform-continuity

Is the complex function $f(z)=\exp(-|z|)$ uniformly continuous on $\mathbb{C}$?

The complex exponential function $\exp(z)$ is defined as
$$
\exp(z) := \lim_{n\to \infty} \Bigl(1+ \frac{z}{n}\Bigr)^n
$$

I tried solving this using the definition of uniform continuity and got this far:
\begin{align}
|f(w)-f(z)| &= \Biggl|\lim_{n\to \infty} \biggl(1-\frac{|w|}{n}\biggr)^n - \lim_{n\to \infty} \biggl(1-\frac{|z|}{n}\biggr)^n\Biggr| \\[0.5ex]
&= \lim_{n\to \infty} \Biggl|\biggl(1-\frac{|w|}{n}\biggr)^n - \biggl(1-\frac{|z|}{n}\biggr)^n\Biggr|
\end{align}

What should I do next? Is this function even uniformly continuous?

Best Answer

First off, I believe that the right intuition for these concepts helps significantly in finding the core approach - see my post on the subject here:

What is the intuition behind uniform continuity?

An intuitive understanding of uniform continuity - and of continuity concepts more generally - is, as I've pointed out there, related to the notion of "approximability": the ability to approximate the output value of a function at an input you cannot obtain exactly, but only approximately. This is necessary for approximate empirical measurements, and for devices like electronic calculators with limited precision, to be useful at all. Continuity basically asserts that if you measure the input precisely enough, you can be assured of a desired accuracy in the output.

The desired accuracy in the output is $\epsilon$: think of it as a "tolerance". If I choose, say, $\epsilon = 0.001$, then what I'm asking is to know the output value of the function to within $\pm 0.001$ of the true output value (think of the confidence or uncertainty intervals in scientific measurements). This is what

$$|f(x_\mathrm{meas}) - f(x_\mathrm{true})| < \epsilon$$

means (where I've used somewhat more suggestive alternate notation for the values in question): the value of $f$ evaluated at the measured approximation $x_\mathrm{meas}$ of the true, "desired" input $x_\mathrm{true}$ differs from the true output value by no more than $\epsilon$. Indeed, even more suggestively, we may write

$$f(x_\mathrm{meas}) \in \left(f(x_\mathrm{true}) - \epsilon, f(x_\mathrm{true}) + \epsilon\right)$$

i.e. that $f(x_\mathrm{meas})$ is literally within the $\pm \epsilon$ uncertainty interval about $f(x_\mathrm{true})$. The $\delta$, then, is how accurately I need to measure the input: again note that we can write

$$|x_\mathrm{meas} - x_\mathrm{true}| < \delta$$

which has the same interpretation.

Now the key here is that in ordinary continuity, the value of $\delta$ needed to achieve a given $\epsilon$ may vary from one point $x_\mathrm{true}$ to another: if $f$ is merely continuous and we know that, say, $\delta = 0.005$ suffices for our $\pm 0.001$ tolerance from before at the input $x_\mathrm{true} = \pi$ (e.g. taking $x_\mathrm{meas} = 3.141$), we cannot be assured that that same $\delta$ will work at, say, $x_\mathrm{true} = \pi^5$. That is, taking $x_\mathrm{meas} = 306.020$, which is within $0.005$ of $\pi^5$, may not give us $\pm 0.001$ tolerance on the output!
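To make this concrete with a different, simpler function (just an illustration of how $\delta$ can depend on the point; it is not the function from the question): for $f(x) = x^2$ on the real line, $|f(x_\mathrm{meas}) - f(x_\mathrm{true})| \approx 2\,|x_\mathrm{true}|\,|x_\mathrm{meas} - x_\mathrm{true}|$ when the inputs are close, so hitting a tolerance of $\epsilon$ requires roughly

$$\delta \approx \frac{\epsilon}{2|x_\mathrm{true}|}: \qquad \delta \approx \frac{0.001}{2\pi} \approx 1.6 \times 10^{-4} \ \text{at } x_\mathrm{true} = \pi, \qquad \delta \approx \frac{0.001}{2\pi^5} \approx 1.6 \times 10^{-6} \ \text{at } x_\mathrm{true} = \pi^5.$$

The accuracy needed in the input keeps shrinking as the point moves outward, and no single positive $\delta$ works at every point: $x^2$ is continuous on $\mathbb{R}$ but not uniformly continuous.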

Clearly, this is a less than ideal state of affairs. What uniform continuity, then, tells us is that we are in a better state of affairs than this: if $f$ is uniformly continuous, then for each tolerance $\epsilon$, the same $\delta$ will work at every input! In other words, in the above example, it doesn't matter if we're trying to figure the value at $\pi$, $\pi^5$, or anything else - $\delta = 0.005$ will always suffice to get that desired $\pm 0.001$ accuracy.
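For reference, written out formally in the measured/true notation from above, the two properties differ only in where $x_\mathrm{true}$ is quantified relative to $\delta$. Ordinary continuity of $f$ on a set $S$ says

$$\forall x_\mathrm{true} \in S \ \forall \epsilon > 0 \ \exists \delta > 0 \ \forall x_\mathrm{meas} \in S : \quad |x_\mathrm{meas} - x_\mathrm{true}| < \delta \implies |f(x_\mathrm{meas}) - f(x_\mathrm{true})| < \epsilon,$$

while uniform continuity on $S$ says

$$\forall \epsilon > 0 \ \exists \delta > 0 \ \forall x_\mathrm{true} \in S \ \forall x_\mathrm{meas} \in S : \quad |x_\mathrm{meas} - x_\mathrm{true}| < \delta \implies |f(x_\mathrm{meas}) - f(x_\mathrm{true})| < \epsilon.$$

Moving "$\exists \delta$" in front of "$\forall x_\mathrm{true}$" is exactly the "same $\delta$ for every input" statement.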

So, now, how does this work in the case you are after? Well, it's simple: if you want to figure out how to prove the statement in question, you can begin by empirically analyzing how the function behaves near some "easy" point - say, for example, $z = 0$, plus or minus some $\epsilon$. And when you do this - for instance by computing $\exp(-\epsilon)$ for a few small values of $\epsilon$ - you may see that it seems that

$$\delta(\epsilon) := \epsilon$$

works: consider $\epsilon = 0.001$. We have $\exp(-0.001) \approx 0.99900050$, which you can check is just a hair under $0.001$ away from $\exp(0) = 1$. Likewise, you will see it holds if you use your calculator with $\epsilon = 0.0001$: the needed $\delta$ drops to a tenth as well. (In fact, this is an even better property, called Lipschitz continuity, but we won't talk about that here.)
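For what it's worth, the Lipschitz property just alluded to would read, stated precisely,

$$\bigl|e^{-|w|} - e^{-|z|}\bigr| \;\le\; \bigl|\,|w| - |z|\,\bigr| \;\le\; |w - z| \qquad \text{for all } w, z \in \mathbb{C}.$$

This is a side remark, not needed for what follows; one way to check the first inequality is the mean value theorem applied to $t \mapsto e^{-t}$ on $[0, \infty)$, whose derivative is bounded by $1$ in absolute value, and the second inequality is the reverse triangle inequality that appears again below. It is also exactly why the guess $\delta(\epsilon) := \epsilon$ is plausible.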

Because we believe the function is uniformly continuous, simply take that as your ansatz: that the above choice of $\delta$ works for any $\epsilon$. That is, that to get

$$|f(z_\mathrm{meas}) - f(z_\mathrm{true})| = |e^{-|z_\mathrm{meas}|} - e^{-|z_\mathrm{true}|}| < \epsilon$$

it suffices to take

$$|z_\mathrm{meas} - z_\mathrm{true}| < \epsilon$$

too. Now consider your limit: the above becomes

$$\left| \left[\lim_{n \rightarrow \infty} \left(1 - \frac{|z_\mathrm{meas}|}{n}\right)^n\right] - \left[\lim_{n \rightarrow \infty} \left(1 - \frac{|z_\mathrm{true}|}{n}\right)^n\right] \right| < \epsilon$$

or, equivalently (the difference of the limits is the limit of the difference, and since $|\cdot|$ is continuous the limit can be pulled outside the absolute value),

$$\lim_{n \rightarrow \infty} \left|\left(1 - \frac{|z_\mathrm{meas}|}{n}\right)^n - \left(1 - \frac{|z_\mathrm{true}|}{n}\right)^n\right| < \epsilon$$

And you want to show that this inequality holds whenever $|z_\mathrm{meas} - z_\mathrm{true}| < \epsilon$.

Now, I will stop here and just give two more pieces. From algebra, we have the factorization:

$$a^n - b^n = (a - b)(a^{n-1} + a^{n-2} b + a^{n-3} b^2 + \cdots + a^2 b^{n-3} + ab^{n-2} + b^{n-1})$$
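For instance, with $n = 3$ this is the familiar

$$a^3 - b^3 = (a - b)(a^2 + ab + b^2).$$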

and we also have:

$$\left| |a| - |b| \right| \le |a - b|$$
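To see how these two pieces might plug into the limit above (just a sketch of a first step, not the whole argument): taking $a = 1 - \frac{|z_\mathrm{meas}|}{n}$ and $b = 1 - \frac{|z_\mathrm{true}|}{n}$ in the factorization gives

$$a - b = \frac{|z_\mathrm{true}| - |z_\mathrm{meas}|}{n}, \qquad \text{and hence} \qquad |a - b| \;\le\; \frac{|z_\mathrm{meas} - z_\mathrm{true}|}{n}$$

by the second inequality. What remains is to handle the factor with the $n$ terms.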

Can you now see where this is going? Compare the limit expression

$$\lim_{n \rightarrow \infty} \left|\left(1 - \frac{|z_\mathrm{meas}|}{n}\right)^n - \left(1 - \frac{|z_\mathrm{true}|}{n}\right)^n\right|$$

with the bound

$$|z_\mathrm{meas} - z_\mathrm{true}| < \epsilon$$

Note that $\epsilon$ plays both roles here and there is no separate $\delta$ anywhere - which is the key point regarding specifically uniform continuity in all this.
