Why is compactness needed in the proof that an interval's outer measure is its length

lebesgue-measure, measure-theory, real-analysis

I have read several proofs showing that the outer measure $m^*(I)$ of an interval is equal to its length $l(I)$, i.e. $m^*(I)=l(I)$, where for an interval $I=[a,b]$ the length is $l(I)=b-a$.

I understand the part showing $m^*(I) \leq l(I)$, but for the other direction, $m^*(I) \geq l(I)$, I cannot see why the proofs really need the compactness of $I$ (its being closed and bounded). From what I read, the outer measure of an interval $I$ is:

$$
m^*(I) = \inf \bigg\{\sum_{j\in J} l(j) \bigg\}
$$

where the infimum is taken over all countable collections $J$ of open intervals covering $I$, and $j$ denotes any open interval belonging to the covering $J$.
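For concreteness, the direction I do understand follows directly from this definition: for any $\epsilon > 0$, the single open interval $(a-\epsilon,\, b+\epsilon)$ already covers $I=[a,b]$, so

$$
m^*(I) \leq l\big((a-\epsilon,\, b+\epsilon)\big) = (b-a)+2\epsilon,
$$

and letting $\epsilon \to 0^+$ gives $m^*(I) \leq b-a = l(I)$.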

Since we certainly have $I \subseteq \bigcup_{j\in J} j$, shouldn't it hold trivially that $m^*(I) \geq l(I)$? After all, whether $J$ is finite or countably infinite, it covers every point of $I$.

So why do we need to guarantee (using compactness and the Heine-Borel theorem) that there is a finite covering $J$, i.e. $|J| < \infty$, that covers $I$ in order to show $m^*(I) \geq l(I)$?

Best Answer

Let $\epsilon>0$. By the definition of $\inf$, there exists an open covering $J$ such that

$$m^*(I)+\epsilon\geq \sum_{j\in J}l(j).$$

But $I$ is compact, so without loss of generality we may take $J$ to be finite: by the Heine-Borel theorem the covering admits a finite subcover, and discarding intervals only decreases the sum of lengths, so the inequality above still holds. Now $J$ is a finite open cover of $I=[a,b]$, so (by the chaining argument sketched below)

$$\sum_{j\in J}l(j)\geq b-a=l(I).$$
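Here is the chaining argument, sketched (the labels $(a_k,b_k)$ are mine, not from any particular textbook): since $a \in [a,b]$ is covered, some interval $(a_1,b_1) \in J$ satisfies $a_1 < a < b_1$. If $b_1 \leq b$, then $b_1 \in [a,b]$ lies in some $(a_2,b_2) \in J$ with $a_2 < b_1 < b_2$; repeating, the endpoints $b_k$ strictly increase, so finiteness of $J$ forces the process to stop at some $(a_n,b_n)$ with $b_n > b$. Telescoping,

$$
\sum_{j\in J} l(j) \;\geq\; \sum_{k=1}^{n}(b_k-a_k) \;=\; b_n - a_1 + \sum_{k=1}^{n-1}(b_k - a_{k+1}) \;\geq\; b_n - a_1 \;>\; b-a,
$$

because each $b_k - a_{k+1} > 0$. With an infinite covering this chain might never get past $b$, which is exactly where finiteness is used.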

Thus for each $\epsilon>0$ you have

$$m^*(I)\geq l(I)-\epsilon,$$

and letting $\epsilon\to 0^+$ gives $m^*(I)\geq l(I)$.
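To see that the finite subcover is doing real work, and that merely covering every point is not enough, here is a standard example with a non-compact set: let $Q=\mathbb{Q}\cap[a,b]$ and enumerate it as $q_1,q_2,\dots$ Covering each $q_k$ by an open interval of length $\epsilon/2^k$ gives

$$
Q \subseteq \bigcup_{k=1}^{\infty}\left(q_k-\frac{\epsilon}{2^{k+1}},\; q_k+\frac{\epsilon}{2^{k+1}}\right), \qquad \sum_{k=1}^{\infty}\frac{\epsilon}{2^k}=\epsilon,
$$

so $m^*(Q)\leq\epsilon$ for every $\epsilon>0$, even though $Q$ is dense in $[a,b]$. This covering has no finite subcover, and its total length collapses to $0$; the compactness of $[a,b]$ is precisely what rules out such a collapse for the whole interval.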