Cardinality of a $\sigma$-algebra over a set of independent random variables.

Tags: independence, measure-theory, probability-theory, random-variables

Problem statement

Suppose we have a set of independent random variables $X = \{X_i\}_{i=1}^n$ defined on a probability space $(\Omega, \mathcal{F}, P)$. I want to understand whether the following holds:

If $\mathcal{F}_X = \sigma\left(\left\{X_1, X_2, \dots, X_n\right\}\right)$, is it in general true that $\left|\mathcal{F}_X\right| = \sum\limits_{i=1}^{n}|\sigma(\{X_i\})|$?

As we know (and as proposed here), since $X$ consists of independent random variables, the $\sigma$-algebras $\sigma(\{X_i\})$ and $\sigma(\{X_j\})$ are independent for all $i, j \in \{1, \dots, n\}$ with $i \neq j$. But how can one proceed from this fact to a decomposition of the cardinality of $\mathcal{F}_X$?

I would appreciate any help, thank you in advance!

Related questions:

  1. Independent random variables and sigma algebras
  2. Cardinality of a sigma algebra of two independent random variables

Best Answer

As correctly mentioned by @LeanderTilstedKristensen, one of the linked questions already contains a counterexample for the case of two independent random variables. Concretely: take two independent fair coin flips $X_1, X_2$ on $\Omega = \{0,1\}^2$. Each $\sigma(\{X_i\})$ has $4$ elements, so the proposed sum is $4 + 4 = 8$, while $\sigma(\{X_1, X_2\})$ is the full power set of $\Omega$, which has $2^4 = 16$ elements. Hence the proposed cardinality relation does not hold in general.
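To make the failure concrete, here is a minimal brute-force sketch (assuming the standard setup of two independent fair coin flips on $\Omega = \{0,1\}^2$, consistent with the linked two-variable counterexample). Since each $\sigma$-algebra here is generated by a finite partition, every member is a union of atoms, so we can enumerate all members directly:

```python
from itertools import chain, combinations

# Sample space for two independent fair coin flips: omega = (x1, x2).
omega = [(a, b) for a in (0, 1) for b in (0, 1)]

def sigma_algebra_from_atoms(atoms):
    """Return the sigma-algebra whose atoms partition the sample space.
    Every member is a union of atoms, so there are 2^len(atoms) sets."""
    sets = set()
    for r in range(len(atoms) + 1):
        for combo in combinations(atoms, r):
            sets.add(frozenset(chain.from_iterable(combo)))
    return sets

# Atoms of sigma(X1): the events {X1 = 0} and {X1 = 1}.
atoms_x1 = [frozenset(w for w in omega if w[0] == v) for v in (0, 1)]
# Atoms of sigma(X2): the events {X2 = 0} and {X2 = 1}.
atoms_x2 = [frozenset(w for w in omega if w[1] == v) for v in (0, 1)]
# Atoms of sigma(X1, X2): the four singletons {(x1, x2)}.
atoms_joint = [frozenset([w]) for w in omega]

size_x1 = len(sigma_algebra_from_atoms(atoms_x1))        # 4
size_x2 = len(sigma_algebra_from_atoms(atoms_x2))        # 4
size_joint = len(sigma_algebra_from_atoms(atoms_joint))  # 2^4 = 16

# The proposed identity would require 4 + 4 == 16, which fails.
print(size_x1 + size_x2, size_joint)
```

The enumeration confirms $|\sigma(\{X_1\})| + |\sigma(\{X_2\})| = 8$ while $|\sigma(\{X_1, X_2\})| = 16$, so the sum of cardinalities cannot equal the cardinality of the jointly generated $\sigma$-algebra.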
