[Math] Proving that a Closed Interval Is Compact

compactness · real-analysis

My text (Stoll, Introduction to Real Analysis, 2nd Ed) defines a subset $K$ of $\mathbb R$ to be compact if every open cover of $K$ has a finite subcover. It then proceeds to prove that every closed interval is compact. To the best of my knowledge the proof is standard, but it feels like it glosses over something:

Let $\{U_\alpha\}_{\alpha \in A}$ be an open cover of $[a, b]$, and set
$E = \{r \in [a, b] : [a, r]$ is covered by a finite number of $U_\alpha\}$.

$E$ is bounded above by $b$ and is nonempty because $a \in E$, so $\sup E$ exists. Let $c = \sup E$; since $b$ is an upper bound, $c \le b$. We show that $c \in E$. Since $c \in [a, b]$, we have $c \in U_\beta$ for some $\beta \in A$. Because $U_\beta$ is open, there is a $\delta > 0$ such that $N_\delta(c) \subset U_\beta$. Since $c - \delta$ is not an upper bound of $E$, there is an $r \in E$ such that $c - \delta \lt r \le c$. Then $[a, r]$ is covered by finitely many $U_\alpha$, say $\{U_{\alpha_1}, U_{\alpha_2}, \dots, U_{\alpha_n}\}$, and $[a, c]$ is covered by $\{U_{\alpha_1}, U_{\alpha_2}, \dots, U_{\alpha_n}, U_\beta\}$. Therefore $c \in E$.
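The covering step in the last two sentences can be condensed into one chain of inclusions, using the same names as above (recall $c - \delta \lt r \le c$ and $N_\delta(c) \subset U_\beta$):

```latex
[a, c] \;\subseteq\; [a, r] \,\cup\, N_\delta(c)
       \;\subseteq\; \Bigl(\bigcup_{j=1}^{n} U_{\alpha_j}\Bigr) \cup U_\beta
```

Indeed, every $x \in [a, c]$ with $x \le r$ lies in $[a, r]$, while every $x$ with $r \lt x \le c$ satisfies $c - \delta \lt x \lt c + \delta$ and so lies in $N_\delta(c)$.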

I will skip the part showing that $c = b$. My question has to do with the set $E$. At first glance it seems circular to me. We are asked to prove that $[a, b]$ is compact, but from the start we are already saying that $[a, r] \subset \bigcup^n_{j = 1} U_{\alpha_j}$. Here, I take $\{U_\alpha\}_{\alpha \in A}$ as arbitrary but fixed. Intuitively, it is possible that $[a, r] \subset \bigcup^n_{j = 1} U_{\alpha_j}$ because $r$ is a limit point of $[a, r]$, so every point "close enough" to $r$ is covered by the same $U_r$ that covers $r$. However, I don't think that alone justifies that $[a, r]$ is covered by finitely many of the $U_\alpha$. What am I missing?

Best Answer

I think you have to forget for a moment about limit points and look at the proof this way: it's a sort of "induction" on $r$. First you prove that $E$ is nonempty, because for $r = a$ there is at least one open set $U_\alpha$ such that $a \in U_\alpha$. Right? Then you want to prove that $r$ can "reach" $b$. There is nothing circular in this.

EDIT. Of course $E$ has infinitely many elements: $E = [a,b]$. :-) And the whole trick is as follows (your book's proof already uses it, and you say it yourself): once you have some $r\in E$, by definition you've got some $U_r$ such that $r\in U_r$, and hence some $\delta > 0$ such that $[r, r+\delta) \subset U_r$. So you can pick $r'\in (r,r+\delta)$ and you'll have that:

  • $[a,r]$ is covered by a finite number of open sets $U_\alpha$,
  • $[r,r']$ is covered by $U_r$.

Hence, $[a,r'] = [a,r] \cup [r,r']$ is covered by a finite number of open sets $U_\alpha$ (the ones that covered $[a,r]$, plus the one that covers $[r,r']$).
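In symbols, writing $U_{\alpha_1}, \dots, U_{\alpha_n}$ for the finite subcover of $[a, r]$, the push step is just:

```latex
[a, r'] \;=\; [a, r] \cup [r, r']
        \;\subseteq\; U_{\alpha_1} \cup \dots \cup U_{\alpha_n} \cup U_r
```

so $r' \in E$, and each push enlarges the finite subcover by at most one set.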

This way, you can "push" that $r$ to the right indefinitely.

(Of course, what I've written is not a proof: the "moving" $r$ could stop before reaching $b$ in my reasoning. The correct way to do it is as in your book, taking the $\sup$ instead of "pushing" $r$ to the right step by step. But now that you've understood the trick without the $\sup$, maybe you can reread your book's proof with it in mind.)
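For completeness, here is a sketch of how the $\sup$ closes that gap, in the notation of the question ($c = \sup E$, $N_\delta(c) \subset U_\beta$, and $\{U_{\alpha_1}, \dots, U_{\alpha_n}, U_\beta\}$ the finite cover of $[a, c]$); this is presumably the part skipped above. If $c \lt b$, pick $r'$ with $c \lt r' \lt \min(c + \delta, b)$. Then:

```latex
[a, r'] \;\subseteq\; [a, c] \,\cup\, N_\delta(c)
        \;\subseteq\; U_{\alpha_1} \cup \dots \cup U_{\alpha_n} \cup U_\beta
```

So $r' \in E$ with $r' \gt c$, contradicting $c = \sup E$. Hence $c = b$, and since $c \in E$, the whole interval $[a, b]$ has a finite subcover.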