Probability Theory – Identically Distributed Random Variables and Zero Probability Events

measurable-functions, measure-theory, probability, probability-theory, random-variables

Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space and $X_1,\dots,X_{n+1}:\Omega\to \mathbb{R}$ be random variables. Suppose that the random variables are identically distributed, i.e. $$\mathbb{P}\circ X_1^{-1}(B) = \ldots = \mathbb{P}\circ X_{n+1}^{-1}(B), \ \ \forall B \in \mathcal{B}(\mathbb{R}).$$ Also suppose that there exist real-valued functions $f:\mathbb{R} \to \mathbb{R}$ and $h:\mathbb{R} \to \mathbb{R}$ such that $$(\mathbb{P}\circ X_1^{-1})(\{x_1\in \mathbb{R}: f(x_1)\not = h(x_1) \}) \stackrel{\text{(1)}}{=} \mathbb{P}[\{\omega \in \Omega :f(X_1(\omega))\not = h(X_1(\omega))\}] = 0.$$ Now consider the remaining random variables as a random vector $X(\omega) = (X_2(\omega),\ldots , X_{n+1}(\omega)):\Omega \to \mathbb{R}^n$. I want to prove that $$(\mathbb{P}\circ X^{-1})(\{(x_2,\ldots , x_{n+1})\in \mathbb{R}^n: f(x_2)\not = h(x_2), \ldots, f(x_{n+1})\not = h(x_{n+1}) \}) \stackrel{\text{(2)}}{=} \mathbb{P}[\{\omega \in \Omega :f(X_2(\omega))\not = h(X_2(\omega)),\dots , f(X_{n+1}(\omega))\not = h(X_{n+1}(\omega))\}] = 0.$$ I don't know whether $(1)$ and $(2)$ are true, but here is my attempted proof anyway.

My try: Note that $$\mathbb{P}[\{\omega \in \Omega :f(X_2(\omega))\not = h(X_2(\omega)),\ldots , f(X_{n+1}(\omega)) \not = h(X_{n+1}(\omega))\}] = \mathbb{P}[B_2 \cap \ldots \cap B_{n+1}],$$ where $B_j = \{\omega \in \Omega: f(X_j(\omega))\not = h(X_j(\omega))\}$ for $j=2,\ldots , n+1$. We know $\mathbb{P}[B_j] = 0$, since the random variables are identically distributed, and events of probability $0$ are independent of every other event. This implies that $$\mathbb{P}[B_2 \cap \dots \cap B_{n+1}] = \mathbb{P}[B_2]\cdots \mathbb{P}[B_{n+1}] = 0.$$

Is this reasoning correct? Is it possible to use this line of reasoning with push-forward measure $\mathbb{P}\circ X^{-1}?$ The push-forward measure assigns probability to a vector of real numbers, so I don't know how to describe the correct subset of $\mathbb{R}^n$ and apply the measure to it.
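To see how the push-forward measure and the event in $\Omega$ line up, here is a minimal sketch on a hypothetical finite probability space (the space, the random variables, and the functions $f,h$ are all made up for illustration). It computes both sides of an identity like $(2)$ exactly:

```python
from fractions import Fraction

# Toy finite probability space (hypothetical): Omega = {0,1,2,3}, uniform P.
omega = [0, 1, 2, 3]
P = {w: Fraction(1, 4) for w in omega}

# Coordinates of the random vector X = (X_2, X_3).
def X2(w): return w % 2
def X3(w): return (w // 2) % 2

# Example functions f, h (assumed for illustration): f(x) != h(x) exactly when x != 0.
def f(x): return x
def h(x): return 0

# Left side of (2): push-forward measure of the subset of R^2
# A = {(x2, x3): f(x2) != h(x2) and f(x3) != h(x3)}.
def in_A(x2, x3):
    return f(x2) != h(x2) and f(x3) != h(x3)

lhs = sum(P[w] for w in omega if in_A(X2(w), X3(w)))

# Right side of (2): P[B_2 ∩ B_3] with B_j = {w: f(X_j(w)) != h(X_j(w))}.
B2 = {w for w in omega if f(X2(w)) != h(X2(w))}
B3 = {w for w in omega if f(X3(w)) != h(X3(w))}
rhs = sum(P[w] for w in B2 & B3)

print(lhs, rhs)  # 1/4 1/4
```

The point is that the "correct subset of $\mathbb{R}^n$" is just $\{(x_2,\ldots,x_{n+1}): f(x_j)\neq h(x_j) \text{ for all } j\}$, and applying $\mathbb{P}\circ X^{-1}$ to it is, by definition, the same as measuring its preimage in $\Omega$.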

Best Answer

The statement is false in general, but it is true if you assume $f$ and $h$ are Borel measurable functions. The comments above by LNT are correct under the assumption of Borel measurability (which is a common assumption).

For simplicity, define $g:\mathbb{R}\rightarrow\mathbb{R}$ by $g(x)=f(x)-h(x)$ for all $x \in \mathbb{R}$.

Setup:

You have identically distributed random variables $X_1, ..., X_n$ with $n\geq 2$. You have a function $g:\mathbb{R}\rightarrow\mathbb{R}$ and you are told $P[g(X_1)\neq 0]=0$. You want to evaluate $P[\cap_{i=2}^n\{g(X_i)\neq 0\}]$.

Case 1 - Suppose $g$ is a Borel measurable function:

Then $g(X_1), g(X_2), ..., g(X_n)$ are identically distributed and so $$P[g(X_i)\neq 0] = P[g(X_1)\neq 0] = 0 \quad \forall i \in \{1, ..., n\}$$ As noted in comments above by LNT (who was implicitly assuming this measurable case) we have $$\cap_{i=2}^n \{g(X_i)\neq 0\} \subseteq \{g(X_2)\neq 0\} \implies P[\cap_{i=2}^n \{g(X_i)\neq 0\}]\leq \underbrace{P[g(X_2)\neq 0]}_{0}$$ and since probabilities cannot be negative, we obtain the desired result $$ \boxed{P[\cap_{i=2}^n \{g(X_i)\neq 0\}]=0}$$
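The monotonicity step above ($B \subseteq A \implies P[B] \le P[A]$) can be sketched on a toy finite probability space; the events below are hypothetical stand-ins for the sets $\{g(X_i)\neq 0\}$:

```python
from fractions import Fraction

# Hypothetical finite space: Omega = {0,...,5}, uniform P.
omega = range(6)
P = {w: Fraction(1, 6) for w in omega}

def prob(E):
    """Probability of an event E (a subset of Omega)."""
    return sum(P[w] for w in E)

# Made-up events B_2, B_3, B_4 standing in for {g(X_i) != 0}.
B = [{0, 1, 2}, {1, 2, 3}, {2, 3, 4}]

inter = set.intersection(*B)

# Inclusion inter ⊆ B_i forces prob(inter) <= min_i prob(B_i),
# which is the bound used in Case 1.
print(prob(inter) <= min(prob(Bi) for Bi in B))  # True
```

When any one of the $B_i$ has probability $0$, the same inequality pins the intersection's probability to $0$.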

Case 2 (counter-example when $g$ is not Borel measurable):

Fix $n=2$. Strange fact: It is possible to have two random variables $X_1:\Omega\rightarrow [0,1]$ and $X_2:\Omega\rightarrow[0,1]$ on the same probability space $(\Omega, \mathcal{F}, P)$ that are both uniformly distributed over $[0,1]$, but with disjoint images:
$$X_1(\Omega)\cap X_2(\Omega) = \emptyset$$

See "Strange uniform random variables" by D. Rizzolo here: https://arxiv.org/abs/1301.7148

Assuming such strange uniform random variables $X_1, X_2$, define the function $g:\mathbb{R}\rightarrow\mathbb{R}$ by $$ g(x) = \left\{\begin{array}{cc} 0 & \mbox{ if $x \in X_1(\Omega)$} \\ 1 & \mbox{ else} \end{array}\right.$$ Then $$ \{g(X_1)\neq 0\} = \emptyset \implies P[g(X_1)\neq 0]=0$$ $$ \{g(X_2)\neq 0\} = \Omega \implies P[g(X_2)\neq 0] = 1$$ In view of Case 1, it is clear that this counter-example cannot occur unless $g$ is nonmeasurable. This means that $X_1(\Omega)$ is not Borel measurable. Indeed, the only way to get these strange uniform random variables is if their images are not Borel measurable sets.

Note: Proving $g(X_i)$ are identically distributed:

Claim: If $X,Y$ are identically distributed random variables and $g:\mathbb{R}\rightarrow\mathbb{R}$ is Borel measurable, meaning that $$ g^{-1}(B) \in \mathcal{B}(\mathbb{R}) \quad \forall B \in \mathcal{B}(\mathbb{R})$$ then $g(X)$ and $g(Y)$ are identically distributed.

Proof: Fix $B \in \mathcal{B}(\mathbb{R})$. Since $g$ is Borel measurable, we know that $g^{-1}(B) \in \mathcal{B}(\mathbb{R})$. So $\{g(X)\in B\} = \{X \in g^{-1}(B)\}$ is a valid event, as is $\{g(Y)\in B\}$.

Then we have \begin{align} P[g(X)\in B]&=P[X \in g^{-1}(B)]\\ &\overset{(a)}{=}P[Y \in g^{-1}(B)]\\ &=P[g(Y) \in B] \end{align} where (a) holds because $X, Y$ are identically distributed. $\Box$
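The claim can be checked exactly on a finite space, where every $g$ is automatically measurable. Here is a small sketch (the space, the random variables, and $g$ are all hypothetical): $X$ and $Y$ are different maps with the same distribution, and the distributions of $g(X)$ and $g(Y)$ come out equal:

```python
from collections import Counter
from fractions import Fraction

# Hypothetical finite space: Omega = {0,1,2,3}, uniform P.
omega = [0, 1, 2, 3]
P = {w: Fraction(1, 4) for w in omega}

# Two different random variables with the same distribution:
# each takes values 1 and 2 with probability 1/2.
X = {0: 1, 1: 2, 2: 1, 3: 2}
Y = {0: 2, 1: 1, 2: 2, 3: 1}

def g(x):
    return x * x  # any function works on a finite space

def dist(Z):
    """Distribution of g(Z) as a value -> probability map."""
    d = Counter()
    for w in omega:
        d[g(Z[w])] += P[w]
    return d

print(dist(X) == dist(Y))  # True
```

On an uncountable space this is exactly where Borel measurability of $g$ is needed, so that $\{X \in g^{-1}(B)\}$ is a legitimate event in the first place.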
