Why is $C([0;1])$ complete with the supremum metric but not with the integral metric?

complete-spaces, metric-spaces, real-analysis, sequence-of-function

I am currently studying metric spaces in my mathematical analysis course and I came across two examples:
First – show that the set of continuous functions on a closed interval (denoted $C([a;b])$), equipped with the supremum metric, is a complete metric space.
Second – show that the same set, equipped this time with the integral metric, is NOT a complete metric space.

I understood the proofs provided, but am a little confused as to why these two different metrics lead to different convergence behaviour geometrically. The proof of the second statement constructed a sequence of functions given by the formula

$$ f_n(x) =\operatorname{sign}(x)\, \sqrt[n]{|x|}.$$

The way I try to visualize this sequence, the graphs seem to approach a vertical slope near zero, which would make sense of the non-convergence (since a function cannot have a vertical slope). But it is confusing to me why the supremum metric would be fine with this. Is it because the supremum metric simply takes discrete values, whereas the integral metric would see a discontinuous “jump” in values near zero?
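(For what it's worth, the pointwise limit of this sequence is the sign function itself,
$$\lim_{n\to\infty} f_n(x)=\lim_{n\to\infty}\operatorname{sign}(x)\,|x|^{1/n}=\begin{cases}-1, & x<0,\\ 0, & x=0,\\ 1, & x>0,\end{cases}$$
so the “vertical slope” I am seeing is really the jump of $\operatorname{sign}(x)$ at $0$.)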

Best Answer

The way I think of it is that the integral metric is less "restrictive" on continuous functions than the supremum metric. Think about it: if you enforce the condition $$\int_{[a,b]} |f(x)|\,dx=\varepsilon$$ how many functions do you think comply with it? It is too unspecific. In my mind I can visualize all kinds of functions, all very different from each other, that have this same area under their graphs. So, in a sense, you can have sequences whose neighbouring terms are arbitrarily close, but because the integral balls are too "large", even those sequences may swim around and not converge.
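To attach a concrete computation to this (taking the domain to be $[-1,1]$, where the $\operatorname{sign}$ factor matters; on $[0,1]$ the same works with $f_n(x)=x^{1/n}$): the integral distance from $f_n$ to the discontinuous function $\operatorname{sign}$ is
$$\int_{-1}^{1}\bigl|f_n(x)-\operatorname{sign}(x)\bigr|\,dx = 2\int_{0}^{1}\bigl(1-x^{1/n}\bigr)\,dx = 2\Bigl(1-\tfrac{n}{n+1}\Bigr)=\frac{2}{n+1}\longrightarrow 0,$$
so by the triangle inequality the terms $f_n$ get arbitrarily close to each other in the integral metric: the sequence is Cauchy. But if it converged in this metric to some continuous $g$, we would need $\int_{-1}^{1}|g-\operatorname{sign}|\,dx=0$, which forces $g=\operatorname{sign}$ away from $0$, and no continuous function can do that. The Cauchy sequence has nowhere to land inside $C([-1,1])$.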

The supremum norm, however, is just "restrictive" enough that those types of sequences always converge. If you enforce $$\sup_{x\in[a,b]} |f(x)|=\varepsilon$$ there are still many functions that satisfy this, but it seems like there are "fewer" of them. Wiggling the graph in my mind under a given "sup cap", I cannot make them too wildly different from each other.
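Measured against the same jump function, the supremum metric tells a very different story:
$$\sup_{x\in[-1,1]}\bigl|f_n(x)-\operatorname{sign}(x)\bigr| = \sup_{0<x\le 1}\bigl(1-x^{1/n}\bigr)=1 \qquad\text{for every } n,$$
because $1-x^{1/n}\to 1$ as $x\to 0^{+}$. The sup metric sees the full height of the jump no matter how large $n$ is, and one can check that the sequence is not even Cauchy in this metric, so there is no conflict with completeness.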

But at the end of the day these are just vague notions that I ended up building to conceptualize why some spaces are complete with one norm and not with another. The real reasons are of course in the proofs of these statements. Still, I think it is fair to say that some norms are not restrictive enough, and that is why completeness ends up failing.
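For anyone who wants to see numbers, here is a quick numerical sanity check (assuming the domain $[-1,1]$) that approximates both metrics on a grid log-spaced near $0$, where the functions differ the most:

```python
import numpy as np

# f_n(x) = sign(x) * |x|^(1/n), the sequence from the question, on [-1, 1]
def f(n, x):
    return np.sign(x) * np.abs(x) ** (1.0 / n)

# Log-spaced positive half-grid mirrored to the negative side; the interesting
# behaviour happens arbitrarily close to 0, so a uniform grid would miss it.
xp = np.logspace(-300, 0, 50_000)
x = np.concatenate([-xp[::-1], [0.0], xp])
limit = np.sign(x)  # the (discontinuous) pointwise limit

for n in [2, 10, 40, 160]:
    diff = np.abs(f(n, x) - limit)
    # trapezoid-rule approximation of the integral metric (exact value: 2/(n+1))
    d1 = np.sum(0.5 * (diff[1:] + diff[:-1]) * np.diff(x))
    # grid approximation of the sup metric (exact value: 1 for every n; for very
    # large n the grid underestimates it, since the sup is only approached as x -> 0)
    dinf = diff.max()
    print(f"n = {n:3d}:  d_1 ≈ {d1:.4f} (exact {2 / (n + 1):.4f}),  d_inf ≈ {dinf:.3f}")
```

The integral distance to the jump function shrinks like $2/(n+1)$ while the sup distance stays pinned at $1$, which is the quantitative version of the integral balls being "too large": the sequence gets integrally close to something outside the space, while the sup metric never lets it.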
