[Math] Unnecessary simple function in the proof of Lebesgue's monotone convergence theorem in Baby Rudin

lebesgue-integral, measure-theory, real-analysis

Baby Rudin, page 318, Theorem 11.28 (Lebesgue's monotone convergence theorem):

Suppose $E\in\mathfrak M$ (where $\mathfrak M$ is a $\sigma$-ring and $\mu$ is a nonnegative measure on $\mathfrak M$). Let $\{f_n\}$ be a sequence of measurable functions such that
$$0\le f_1(x)\le f_2(x)\le\dotsb\qquad(x\in E)$$
Let $f$ be defined by
$$f_n(x)\to f(x)\qquad(x\in E)$$
as $n\to\infty$. Then
$$\int_Ef_nd\mu\to\int_Efd\mu$$
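
(As a concrete illustration of the statement, not taken from Rudin's text: take $E=(0,1]$ with Lebesgue measure, $f(x)=x^{-1/2}$ and $f_n=\min(f,n)$. Then $0\le f_1\le f_2\le\dotsb$, $f_n\to f$ on $E$, and indeed
$$\int_Ef_nd\mu=\int_{1/n^2}^1x^{-1/2}dx+n\cdot\frac1{n^2}=\left(2-\frac2n\right)+\frac1n=2-\frac1n\to2=\int_Efd\mu.)$$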

First he proved that if $\int_Ef_nd\mu\to\alpha$, then $\alpha\le\int_Efd\mu$. I have no problem with this part.

Next he introduced an arbitrary constant $0<c<1$ and a simple measurable function $s$ such that $0\le s\le f$, put
$$E_n=\left\{\,x\,\big\vert\,f_n(x)\ge cs(x)\,\right\}$$
and claimed that $E_1\subset E_2\subset E_3\subset\dotsb$, and
$$E=\bigcup_{n=1}^\infty E_n$$
For every $n$,
$$\int_E f_nd\mu\ge\int_{E_n}f_nd\mu\ge c\int_{E_n}sd\mu$$
Letting $n\to\infty$, we have $\alpha\ge c\int_Esd\mu$. Letting $c\to1$, we see that
$$\alpha\ge\int_Esd\mu$$
Since this holds for every simple measurable $s$ with $0\le s\le f$, and $\int_Efd\mu$ is by definition the supremum of $\int_Esd\mu$ over all such $s$, we obtain $\alpha\ge\int_Efd\mu$, hence the result.
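
(Here is, I think, why $E=\bigcup_{n=1}^\infty E_n$, and where $c<1$ matters: if $f(x)=0$ then $s(x)=0$, so $x\in E_1$; if $f(x)>0$ then
$$cs(x)\le cf(x)<f(x),$$
and since $f_n(x)\to f(x)$, we get $f_n(x)\ge cs(x)$ for all large $n$, i.e. $x\in E_n$ eventually.)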

I have a question: why do we need to introduce such an $s$ at all? Since each $f_n$ is measurable, $f$ is measurable. If we replace $s$ by $f$ directly, we still obtain a sequence of measurable sets $E_n$. None of the inferences seems to depend on the simplicity of $s$, so isn't it unnecessary?

Am I wrong? If so, where's my mistake? Thanks!

Best Answer

Update: I have Rudin's Principles of Mathematical Analysis (PMA) and Real and Complex Analysis (RCA) in front of me now and I have to agree with you - there is no need to use a simple function $s$ instead of $f$ itself in PMA's proof of the monotone convergence theorem since the heavy lifting with simple functions has already been done in theorem 11.24. But the interesting part is comparing this to RCA. There the proof is identical and the theorems corresponding to PMA's 11.3 and 11.24 are 1.19(d) and 1.25, respectively, with one big difference: 1.25 is only stated for nonnegative measurable simple functions whereas PMA's 11.24 was stated for nonnegative measurable functions. Therefore RCA's proof has to use simple functions.

So, unless someone else can spot why one would still need to use simple functions in PMA's proof, I'm assuming that professor Rudin simply used the same proof in both books and for some reason didn't streamline it for PMA. The proof is still correct, of course.

I have the third edition of both books and it would be interesting to know if the proof is the same in earlier editions. After all, according to Wikipedia, Rudin wrote PMA first, so the proof in RCA couldn't have influenced his decisions for the first edition. In any case, a very nice observation by you.

I'm leaving my original answer below so that your comment still makes sense and hopefully it could still serve as food for thought for some.


If you were to replace $s$ by $f$ you would be using circular reasoning. The line $$\int_E f_n \, \mathrm{d}\mu \geq \int_{E_n} f_n \, \mathrm{d}\mu \geq c\int_{E_n}s \, \mathrm{d}\mu = c\int_E s1_{E_n} \, \mathrm{d}\mu,$$ where $1_{E_n}$ is the indicator (or characteristic) function of $E_n$, would become $$\int_E f_n \, \mathrm{d}\mu \geq \int_{E_n} f_n \, \mathrm{d}\mu \geq c\int_{E_n}f \, \mathrm{d}\mu = c\int_E f1_{E_n}\, \mathrm{d}\mu.$$ But we can't yet conclude that $$\lim_{n \to \infty} c\int_E f1_{E_n}\, \mathrm{d}\mu = c\int_E f \, \mathrm{d}\mu,$$ because $f1_{E_n}$ is not a simple function, and we are in the process of proving the very monotone convergence theorem that would justify this conclusion. We have to use simple functions, since for them this convergence is true by the definition of the integral.
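
To make that last point concrete (a sketch in labels of my own choosing, not the answerer's or Rudin's: write $s=\sum_{i=1}^k c_i1_{A_i}$ with the $A_i$ disjoint and measurable), the convergence for a simple $s$ needs nothing beyond the definition of its integral and the continuity of $\mu$ from below (PMA 11.3 / RCA 1.19(d), cited above):
$$\int_{E_n}s\,\mathrm{d}\mu=\sum_{i=1}^k c_i\,\mu(A_i\cap E_n)\;\longrightarrow\;\sum_{i=1}^k c_i\,\mu(A_i\cap E)=\int_E s\,\mathrm{d}\mu\qquad(n\to\infty),$$
since $A_i\cap E_n$ increases to $A_i\cap E$. No convergence theorem for general measurable integrands is needed, which is exactly what the substitution $s=f$ would have demanded.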
