[Math] Monotone Convergence Theorem for Riemann Integrable functions

analysis, functional-analysis, real-analysis

I'm having a really hard time proving this statement (this is not homework):

If $f_{n} : [0,1] \rightarrow \mathbb{R}$ is a Riemann integrable function for each $n \in \mathbb{N}$, $0 \leq f_{n + 1} \leq f_{n}$, and $f_{n} \rightarrow 0$ pointwise on $[0,1]$, I need to prove that $\lim \limits_{n \rightarrow \infty} \int \limits_{0}^{1} f_{n}(x) \text{ } \mathrm{d}x = 0$.
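To get a feel for the statement, here is a quick numerical check on a concrete sequence (the example $f_{n}(x) = x^{n}(1 - x)$ is my own choice, not from the question; it is decreasing in $n$, converges pointwise to $0$ everywhere on $[0,1]$, and has $\int_0^1 x^n(1-x)\,dx = \frac{1}{(n+1)(n+2)}$):

```python
import numpy as np

def trapezoid(y, x):
    # Simple trapezoidal rule, written out to avoid depending on a
    # particular NumPy version's integration helper.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def f(n, x):
    # f_n(x) = x^n (1 - x): non-negative, decreasing in n, -> 0 pointwise
    return x**n * (1 - x)

xs = np.linspace(0.0, 1.0, 100_001)
for n in [1, 2, 5, 10, 100]:
    approx = trapezoid(f(n, xs), xs)
    exact = 1.0 / ((n + 1) * (n + 2))
    print(n, approx, exact)
```

The computed integrals decrease to $0$, matching the closed form $\frac{1}{(n+1)(n+2)}$, which is what the theorem predicts.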

I'm not allowed to use the Monotone Convergence Theorem for Riemann integrable functions (proving this is actually the first step in proving the MCT).

Now, I know $\lim \limits_{n \rightarrow \infty} \int \limits_{0}^{1} f_{n}(x) \text{ } \mathrm{d}x$ exists because the sequence $\left \{ \int \limits_{0}^{1} f_{n}(x) \text{ } \mathrm{d}x \right \}_{n =1}^{\infty}$ is a monotonically decreasing sequence that is bounded from below by $0$. However, I have no idea how to prove the limit is $0$.

Also, there is a hint to the problem: assume $\lim \limits_{n \rightarrow \infty} \int \limits_{0}^{1} f_{n}(x) \text{ } \mathrm{d}x = \epsilon > 0$. I must choose a partition $P_{n}$ for each $f_{n}$ such that $P_{n + 1}$ is a refinement of $P_{n}$, and show that there exists a point of $[0,1]$ at which $f_{n}$ stays bounded below by some strictly positive value, which would contradict the hypothesis of pointwise convergence to $0$.

I thought of using a nested sequence of closed intervals, because their intersection would be nonempty (a nested sequence of nonempty closed intervals in the compact space $[0,1]$ has nonempty intersection), but I can't construct the sequence. If the hint makes the problem harder, is there an easier way to prove this statement? Any help would be greatly appreciated.

Best Answer

We need to prove the following lemma:

Lemma: If $f$ is non-negative and Riemann integrable on $[a, b]$ with $$\int_{a}^{b}f(x)\,dx > 0$$ then there is a sub-interval of $[a, b]$ on which $f$ is bounded below by a positive constant.

The lemma is proved easily from the definition of the Riemann integral. Let $$I = \int_{a}^{b}f(x)\,dx > 0$$ and let $0 \leq f(x) < M$ for all $x \in [a, b]$. By the definition of the Riemann integral there is a partition $$P = \{x_{0}, x_{1}, x_{2}, \dots, x_{n}\}$$ of $[a, b]$ such that every Riemann sum $$S(f, P) = \sum_{i = 1}^{n}f(t_{i})(x_{i} - x_{i - 1}) > \frac{I}{2}$$ no matter how the tags $t_{i} \in [x_{i - 1}, x_{i}]$ are chosen. Let $$\epsilon = \frac{I}{2(M + b - a)}$$ and consider the set $J = \{x: x \in [a,b], f(x) \geq \epsilon\}$.

Now we split the Riemann sum $S(f, P)$ as $$S(f, P) = \sum_{i \in A}f(t_{i})(x_{i}- x_{i - 1}) + \sum_{i \in B}f(t_{i})(x_{i}- x_{i - 1})$$ where $$A = \{i: [x_{i - 1}, x_{i}] \subseteq J\}, \qquad B = \{i: i \notin A\}$$ For $i \in A$ we have the bound $f(t_{i}) < M$, and for $i \in B$ the interval $[x_{i - 1}, x_{i}]$ is not contained in $J$, so we can choose the tag $t_{i}$ with $f(t_{i}) < \epsilon$. With these choices $$\epsilon(M + b - a) = \frac{I}{2} < S(f, P) < M\sum_{i \in A}(x_{i} - x_{i - 1}) + \epsilon(b - a)$$ Subtracting $\epsilon(b - a)$ from both sides and dividing by $M$ gives $$\sum_{i\in A}(x_{i} - x_{i - 1}) > \epsilon$$ so the intervals $[x_{i - 1}, x_{i}]$ with $i \in A$, on each of which $f(x) \geq \epsilon$, have total length exceeding $\epsilon$; in particular $A$ is non-empty. The above proof is taken from my favorite book Mathematical Analysis by Tom M. Apostol.
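The quantitative conclusion of the lemma can be checked numerically on a concrete function (the choices $f(x) = x$, $[a,b] = [0,1]$, $M = 2$ below are mine, for illustration only): with $I = \frac12$ we get $\epsilon = \frac{I}{2(M + b - a)} = \frac{1}{12}$, and the set $\{x : f(x) \geq \epsilon\} = [\frac{1}{12}, 1]$ has length $\frac{11}{12}$, comfortably greater than $\epsilon$.

```python
import numpy as np

def trapezoid(y, x):
    # Simple trapezoidal rule on a grid, to keep the sketch self-contained.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

a, b, M = 0.0, 1.0, 2.0          # 0 <= f(x) < M on [a, b]
xs = np.linspace(a, b, 100_001)
fv = xs                           # f(x) = x

I = trapezoid(fv, xs)             # I = 1/2
eps = I / (2 * (M + (b - a)))     # eps = 1/12, as in the lemma
# length of {x : f(x) >= eps}, computed as the integral of its indicator
length = trapezoid((fv >= eps).astype(float), xs)

print(I, eps, length)             # the lemma predicts length > eps
```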

Now we need to make use of this lemma for solving the current problem. Let's assume that $$\lim_{n \to \infty}\int_{0}^{1}f_{n}(x)\,dx = \lim_{n \to \infty}I_{n} = c > 0$$ Then there is an $N$ such that $I_{n} > c/2$ for all $n \geq N$. By our lemma, for each such $n$ there are sub-intervals of $[0,1]$ on which $f_{n}(x) \geq d$, where $d > 0$ is a constant depending on $c$ (and on a bound $M$ for $f_{1}$, which bounds every $f_{n}$ since the sequence decreases) but not on $n$. Moreover we see that there is a partition $P_{n}$ of $[0, 1]$ such that $f_{n}(x)\geq d$ on some of the sub-intervals made by $P_{n}$ (whose total length also exceeds $d$), while each remaining sub-interval contains a point where $f_{n}(x) < d$. Replacing $P_{n + 1}$ by $P_{n} \cup P_{n + 1}$ if necessary, we may assume that $P_{n + 1}$ is a refinement of $P_{n}$.
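One explicit choice of a uniform constant (this particular formula is my own, filling in the answer's claim that $d$ does not depend on $n$): since $0 \leq f_{n} \leq f_{1} < M$ for every $n$, applying the lemma on $[0,1]$ with $I = I_{n} > c/2$ gives

```latex
% eps_n is the constant produced by the lemma for f_n on [0,1] (b - a = 1);
% it is bounded below uniformly in n, and we take that lower bound as d:
\epsilon_{n} \;=\; \frac{I_{n}}{2(M + 1)} \;>\; \frac{c}{4(M + 1)} \;=:\; d .
```

Since $f_{n} \geq \epsilon_{n} > d$ on sub-intervals of total length greater than $\epsilon_{n} > d$, every $f_{n}$ with $n \geq N$ is at least $d$ on sub-intervals of total length greater than $d$.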

~~Since $f_{n}$ is monotonically decreasing it follows that the sub-intervals of partition $P_{n + 1}$ where $f_{n + 1}(x) \geq d$ are contained in the corresponding sub-intervals of $P_{n}$ and again these sub-intervals of $P_{n + 1}$ have a total length greater than $d$. Thus it follows that there is a sub-interval of $[0, 1]$ where $f_{n}(x) \geq d > 0$ for all $n$ after a certain value. This contradicts the fact that $f_{n}$ converges pointwise to $0$.~~


The above proof is wrong and the incorrect inferences are struck out. The approach can, however, be salvaged with some more effort. An alternative approach is given in this answer.