Real Analysis – Understanding Baby Rudin’s Theorem 7.23

real-analysis, sequences-and-series

The theorem is the following.

If $\{f_n\}$ is a pointwise bounded sequence of complex functions on a countable set $E$, then $\{f_n\}$ has a subsequence $\{f_{n_k}\}$ such that $\{f_{n_k}(x)\}$ converges for every $x\in E$.

Proof

Let $\{x_i\}$, $i=1,2,3,\ldots$, be the points of $E$, arranged in a sequence. Since $\{f_n(x_1)\}$ is bounded, there exists a subsequence, which we shall denote by $\{f_{1,k}\}$, such that $\{f_{1,k}(x_1)\}$ converges as $k\to\infty$.

Let us now consider sequences $S_1, S_2, S_3, \ldots$, which we represent by the array

$$
\begin{array}{llllll}
S_1: & f_{1,1} & f_{1,2} & f_{1,3} & f_{1,4} & \cdots \\
S_2: & f_{2,1} & f_{2,2} & f_{2,3} & f_{2,4} & \cdots \\
S_3: & f_{3,1} & f_{3,2} & f_{3,3} & f_{3,4} & \cdots \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots
\end{array}
$$

and which have the following properties:

(a) $S_n$ is a subsequence of $S_{n-1}$, for $n=2,3,4,\ldots$;

(b) $\{f_{n,k}(x_n)\}$ converges, as $k\to\infty$ (the boundedness of $\{f_n(x_n)\}$ makes it possible to choose $S_n$ in this way);

(c) The order in which the functions appear is the same in each sequence; i.e., if one function precedes another in $S_1$, they are in the same relation in every $S_n$, until one or the other is deleted. Hence, when going from one row in the above array to the next one below, functions may move to the left but never to the right.

We now go down the diagonal of the array; i.e., we consider the sequence

$$
S:\quad f_{1,1}\;\; f_{2,2}\;\; f_{3,3}\;\; f_{4,4}\;\cdots
$$

By (c), the sequence $S$ (except possibly its first $n-1$ terms) is a subsequence of $S_n$, for $n=1,2,3,\ldots$. Hence (b) implies that $\{f_{n,n}(x_i)\}$ converges, as $n\to\infty$, for every $x_i\in E$.

End of proof.
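To make the bookkeeping concrete, here is a small sketch (not from Rudin's text): record each $S_n$ by the list of indices of the original sequence $\{f_n\}$ that survive into it. Condition (c) says precisely that each of these index lists stays in increasing order, and the diagonal picks the $n$-th entry of the $n$-th list. The Python sketch below uses a made-up nested family of index lists and checks the two facts the proof relies on:

```python
# A minimal sketch (not from Rudin): represent each S_n by the increasing list
# of indices of the original sequence {f_n} that survive into it.  Keeping
# these lists increasing is exactly what condition (c) guarantees.
def nested_subsequences(depth, length):
    """Build a made-up nested family: each row is an order-preserving
    sub-list of the previous one (here, every second surviving index)."""
    rows = [list(range(1, length))]       # S_1: some increasing index list
    for _ in range(depth - 1):
        rows.append(rows[-1][::2])        # S_{n+1}: a subsequence of S_n
    return rows

def is_subsequence(a, b):
    """True if the list a occurs inside the list b in order
    (not necessarily contiguously)."""
    it = iter(b)
    return all(x in it for x in a)

S = nested_subsequences(depth=6, length=200)
diag = [S[n][n] for n in range(len(S))]   # indices of f_{1,1}, f_{2,2}, ...

# Fact 1: the diagonal indices are strictly increasing, so the diagonal
# really is a subsequence of the original sequence {f_n}.
assert all(d1 < d2 for d1, d2 in zip(diag, diag[1:]))

# Fact 2: after dropping its first n-1 terms, the diagonal is a subsequence
# of S_n -- the property that lets (b) give convergence at every x_n.
for n in range(len(S)):
    assert is_subsequence(diag[n:], S[n])

print("diagonal indices:", diag)          # [1, 3, 9, 25, 65, 161]
```

The parameters and the particular family here are arbitrary; the point is that order-preserving nesting, which is exactly what (c) protects, makes both checks pass automatically.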

My question is: why is (c) necessary? From (a) and the definition of a subsequence, the elements of each $S_n$ (e.g., $f_{1,1}$) are already in order, so isn't condition (c) redundant?

(I am new to this site and my English is not very good; sorry if I have used any rude expressions.)

Best Answer

Condition (c) prohibits situations where, for example, both $f_{1,3}$ and $f_{1,8}$ are in the subsequence $S_2$, but out of order, say $f_{1,3}=f_{2,75}$ and $f_{1,8}=f_{2,2}$. In the prohibited situation, $S_2$ would not be a subsequence of $S_1$ (even though the set of terms of $S_2$ is a subset of the set of terms of $S_1$). In the absence of condition (c), such maneuvers (and more complicated shuffles) would be allowed at every stage of the process that produces the $S_n$'s, and such shuffling could completely mess up the diagonal sequence $S$. For example, you could use Rudin's $S_n$ but re-order the terms in each $S_n$ to interchange some useless term in $S_n$ with the diagonal element $f_{n,n}$. Then your new diagonal sequence $S$ would consist of all those useless terms, and (barring extreme luck) the diagonal sequence wouldn't converge as the theorem claims.

A rough summary of the preceding paragraph is that, after getting each $S_n$ to converge at $x_n$, you need the diagonal sequence $S$ to be, except for finitely many terms, a subsequence of (or at least seriously related to) all these $S_n$'s. And if each $S_n$ might have been shuffled at will, there's no way to ensure that. So (c) says "don't shuffle."
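To see concretely the kind of damage the shuffle described above can do, here is a hypothetical follow-up to the index-list sketch after the proof: every row keeps exactly the same terms as an honest nested family, but within row $n$ the diagonal slot is swapped with a late "useless" slot, violating (c).

```python
# A follow-up sketch (hypothetical, same index-list picture as above): keep
# the same terms in every row, but break condition (c) by swapping the
# diagonal slot of each row with a late "useless" slot.
def is_subsequence(a, b):
    it = iter(b)
    return all(x in it for x in a)

# An honest nested, order-preserving family of index lists.
S = [list(range(1, 200))]
for _ in range(5):
    S.append(S[-1][::2])

# Shuffled rows: identical sets of indices, order violated at the diagonal.
T = [row[:] for row in S]
for n, row in enumerate(T):
    row[n], row[-1] = row[-1], row[n]     # swap f_{n,n} with a late term

good_diag = [S[n][n] for n in range(len(S))]
bad_diag = [T[n][n] for n in range(len(T))]
print("ordered rows, diagonal indices:", good_diag)   # strictly increasing
print("shuffled rows, diagonal indices:", bad_diag)   # not even increasing

# The property the proof needs -- "the diagonal, minus its first n-1 terms,
# is a subsequence of the n-th row" -- holds for every row of S but fails
# for the shuffled rows (the last one passes only because its tail is a
# single term that belongs to the row).
print([is_subsequence(good_diag[n:], S[n]) for n in range(len(S))])
print([is_subsequence(bad_diag[n:], T[n]) for n in range(len(T))])
```

With the ordered rows the diagonal indices are strictly increasing and every tail of the diagonal is a subsequence of the corresponding row; with the shuffled rows the diagonal is not even increasing, so the argument via (b) has nothing to grab onto.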