This is a good question. I haven't yet worked out a complete answer myself, but Mariano's comment above is definitely part of it: the given product $\sigma$-algebra, which resembles and is surely modelled on the product topology, has the analogous property of being the smallest $\sigma$-algebra that makes all of the coordinate projections measurable.
Because of this, I believe that if you make a category out of all measurable spaces and measurable functions in the obvious way, the product $\sigma$-algebra you have defined turns out to be the categorical product: i.e., it satisfies the requisite universal mapping property. Again, this is the situation for the product topology.
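To make the universal property explicit, here is a sketch in my own notation (not taken verbatim from any of the answers):

```latex
% Universal property of the product \sigma-algebra (sketch; notation is mine).
Let $(X_i,\mathcal A_i)_{i\in I}$ be measurable spaces, let $X=\prod_{i\in I}X_i$,
and let $\pi_i\colon X\to X_i$ be the coordinate projections. The product
$\sigma$-algebra is
\[
  \bigotimes_{i\in I}\mathcal A_i
    \;=\; \sigma\bigl(\{\pi_i^{-1}(A_i) : i\in I,\ A_i\in\mathcal A_i\}\bigr).
\]
Universal property: for any measurable space $(Z,\mathcal C)$, a map
$f\colon Z\to X$ is measurable with respect to $\bigotimes_{i\in I}\mathcal A_i$
if and only if each composite $\pi_i\circ f\colon Z\to X_i$ is measurable.
This is precisely the defining property of the categorical product in the
category of measurable spaces and measurable maps.
```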
But I think the rest of the explanation has to do with the fact that this $\sigma$-algebra gives you the theorems you want, just as the product topology -- and not, for instance, the "box topology" -- has nice properties, especially Tychonoff's theorem. (The product topology was first introduced in Tychonoff's paper, and his theorem played a large role in convincing mathematicians that it was the "right" topology on an infinite Cartesian product.)
I'm not sure exactly what the analogous result to Tychonoff's theorem is here, but I do know that this "coarsest" product $\sigma$-algebra enables one to define arbitrary products of probability spaces: see this lovely paper of S. Saeki¹ for an incredibly short proof of that. I hope it is at least clear where the "coarseness" of the chosen product $\sigma$-algebra comes in handy. (Added: Michael Greinecker's answer shows that countable products actually behave rather well, so let us think about uncountable products.) If we allowed arbitrary products $Y = \prod_{i} Y_i$ of measurable subsets $Y_i \subset X_i$ to be measurable, then what should the measure of $Y$ be? If the set of indices $i$ for which $\mu_i(Y_i) < 1$ is uncountable, then the net of finite partial products (indexed by the finite subsets of $I$) must converge to zero. By requiring $Y_i = X_i$ for all but finitely many $i$, we get that $\mu_i(Y_i) = 1$ for all but finitely many $i$, so the infinite product is really a finite product.
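The claim that the partial products must collapse to zero can be spelled out with a short pigeonhole argument (a sketch in my own notation):

```latex
% Why uncountably many factors of measure < 1 force the product to vanish.
Suppose $\mu_i(Y_i)<1$ for each $i$ in an uncountable set $S\subset I$. Since
$S=\bigcup_{n\ge 1}\{\,i\in S : \mu_i(Y_i)\le 1-\tfrac1n\,\}$, some
$S_n=\{\,i\in S : \mu_i(Y_i)\le 1-\tfrac1n\,\}$ is infinite. If a finite set
$F\subset I$ contains $k$ indices from $S_n$, then
\[
  \prod_{i\in F}\mu_i(Y_i)\;\le\;\Bigl(1-\tfrac1n\Bigr)^{k}
  \;\longrightarrow\; 0 \qquad (k\to\infty),
\]
so the net of finite partial products converges to $0$. Hence $Y$ can carry
positive measure only if $\mu_i(Y_i)<1$ for at most countably many $i$, and
even then only if the resulting countable product converges to a positive limit.
```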
Is there more to the story than this? Is the above construction of the product probability measure the "right" analogue of Tychonoff's theorem in this context (is there even a "right" analogue of Tychonoff's theorem in this context?)? I'm not sure, and I would be interested to hear more from others.
¹ Saeki, Sadahiro, A proof of the existence of infinite product probability measures, Am. Math. Mon. 103, No. 8, 682–683 (1996). ZBL0882.28005, MR1413587. Link to a snapshot in the Wayback Machine.
For a countable collection of separable metric spaces (like $\mathbb{R}^\infty$) these two $\sigma$-algebras are actually equal; see, for example, Kallenberg's Foundations of Modern Probability, second edition, page 3.
In general the inclusion need not hold; see Michael Greinecker's example in the comments.
Best Answer
It seems that some of the confusion lives on in the answers and comments of others. After consulting the literature (I am not an expert in this area), I think I know what the confusion is about.
There are two common situations that arise when considering infinite families of probability measures. The first one concerns arbitrary products of probability spaces:
A proposition in measure-theoretic probability says that the answer to both questions is yes in all cases. Proofs of this theorem can be found in many of the more advanced textbooks on measure theory or probability (see references below). Note: although the proof uses only basic measure theory, it works only if the factors in the product are probability spaces; it does not extend to arbitrary measure spaces!
The second situation is addressed by Kolmogorov's extension theorem. Here the independence requirement is replaced by a much weaker consistency criterion.
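For concreteness, the consistency criterion can be stated as follows (a sketch; the notation $\mu_F$, $\pi^G_F$ is mine):

```latex
% Kolmogorov consistency (sketch; notation is mine).
Suppose that for every finite $F\subset I$ we are given a probability measure
$\mu_F$ on the finite product $(\Omega_F,\mathscr A_F)$. The family
$(\mu_F)_{F}$ is \emph{consistent} if for all finite $F\subset G\subset I$
\[
  \mu_F \;=\; \bigl(\pi^G_F\bigr)_{*}\,\mu_G,
\]
where $\pi^G_F\colon\Omega_G\to\Omega_F$ is the coordinate projection and
$(\pi^G_F)_{*}$ denotes the pushforward (image) measure. Kolmogorov's extension
theorem says that, under suitable hypotheses on the spaces
$(\Omega_i,\mathscr A_i)$, a consistent family extends to a unique probability
measure $\mu$ on the full product with $(\pi_F)_{*}\mu=\mu_F$ for every finite $F$.
```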
It turns out that the answer to this question is no in general (see Cohn, exercise 10.6.5). However, if the spaces $(\Omega_i,\mathscr A_i)$ are sufficiently nice, the answer is yes, and luckily this still covers many practical uses.
So what exactly is the difference between the two situations? To answer this question, you must convince yourself that the consistency requirement is much weaker than independence. In fact, situation 2 can be used to find a joint distribution for an infinite family of dependent random variables! This plays an important role in the theory of stochastic processes. (Indeed, Bauer proves the result in the first section of his chapter on stochastic processes; see references below.)
It is of course possible to use Kolmogorov's extension theorem to form infinite products of independent random variables: we simply specify the probability measure on every finite product $(\Omega_F,\mathscr A_F)$ to be the one that makes the random variables $\pi_i$, $i \in F$, independent. It seems that the author of the OP's lecture notes mistakenly thought that this is the only way to construct infinite sequences of independent random variables, and therefore incorrectly concluded that such sequences exist only in certain (topological) cases. An understandable mistake, but confusing nonetheless.
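In that special case the finite-dimensional specification is just the finite product measure, and consistency is immediate (a sketch; notation mine):

```latex
% Independence as a special case of consistency (sketch; notation is mine).
Take $\mu_F=\bigotimes_{i\in F}\mu_i$ on each finite product $\Omega_F$. For
finite $F\subset G$ and a measurable rectangle $\prod_{i\in F}A_i$,
\[
  \bigl(\pi^G_F\bigr)_{*}\mu_G\Bigl(\prod_{i\in F}A_i\Bigr)
  \;=\;\prod_{i\in F}\mu_i(A_i)\cdot\prod_{i\in G\setminus F}\mu_i(\Omega_i)
  \;=\;\prod_{i\in F}\mu_i(A_i)
  \;=\;\mu_F\Bigl(\prod_{i\in F}A_i\Bigr),
\]
so the family is consistent, and (when the extension theorem applies) the
resulting measure makes the projections $\pi_i$ independent with laws $\mu_i$.
```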
As a final word, those who have seen such constructions in other areas of mathematics may recognise situation 1 as a product of probability spaces and situation 2 as a projective limit of probability spaces. This nomenclature is also used by Bauer; see references below.
References:
Donald L. Cohn, Measure Theory (Second Edition), Birkhäuser Advanced Texts: Basler Lehrbücher, 2013. Both theorems are treated in section 10.6, and it was this exposition that led me to understand the difference between the two results. The result from situation 1 is proved only for countable products in the main text; the general case is deferred to the exercises. Furthermore, an outline of a counterexample for the general case of situation 2 is given in the exercises.
Heinz Bauer, Robert R. Burckel (translator), Probability Theory, de Gruyter Studies in Mathematics 23, Walter de Gruyter, 1996. This is a very technical (but nevertheless great) introduction to probability theory from a measure theoretic point of view, which assumes knowledge of measure theory as a prerequisite. Situation 1 is covered in §9 (infinite products of probability spaces), and situation 2 is covered in §35 (projective limits of probability measures). The independence requirement in situation 1 is somewhat hidden in the wording of theorem 9.2, but it follows from the remarks preceding the theorem. An outline of a counterexample for the general case of situation 2 is given at the end of §35.
Paul R. Halmos, Measure Theory, Graduate Texts in Mathematics 18, Springer, 1974 (reprint of the 1950 edition by Van Nostrand). This book includes a clear proof and helpful remarks for situation 1 in §38 (infinite dimensional product spaces). Again the result is only proven for countable products, and the general case is deferred to the exercises. Like Bauer, he formulates the theorem without using the word independence. The book does not seem to treat situation 2.
Jacques Neveu supposedly also addresses situation 1 in his book Mathematical Foundations of the Calculus of Probability (translated from French), but I don't seem to have access to this book.