Let $(\Omega, \mathcal F, P)$ be a probability space and let $X_n: \Omega \to \mathbb{R}$ be a sequence of random variables. My question is: how do we define $\liminf_{n\to\infty} X_n$? Is it the limit of the infimum of the values taken by $X_n$ over $\Omega$?
Limit of infimum of random variable
measure-theory, probability-theory, random-variables
Related Solutions
I cannot comment yet, so I'm posting this as an answer.$\def\ci{\perp\!\!\!\perp}$
This is probably not what you were asking, but I think it's interesting and relevant enough to post.
It is known to be possible to construct arbitrary distributions from uniform variables. Furthermore, given a $\mathcal U[0,1]$ variable, it is possible to produce from it an i.i.d. sequence of such variables, which can then be used to obtain more general distributions. We can always extend a space to obtain such variables by$$\hat{\Omega}=\Omega\times[0,1]\text{, }\hat{\mathscr{A}}=\mathscr{A}\otimes\mathscr{B}\text{, }\hat{P}=P\otimes\lambda $$ in which case $\vartheta(\omega,t):= t$ is $\mathcal{U}[0,1]$ and $\vartheta\ci \mathscr{A}$.
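As a concrete illustration (my own sketch, not from the reference below), both facts used above can be seen in code: a single $\mathcal U[0,1]$ draw can be split into two (approximately) independent uniforms by de-interleaving its binary digits, and a uniform can be pushed through an inverse CDF to produce another distribution, here $\mathrm{Exp}(1)$:

```python
import math
import random

def split_uniform(u, bits=52):
    """De-interleave the binary digits of u into two uniforms.

    Truncated to `bits` digits, so each output is only approximately
    U[0,1]; the same trick with countably many digit streams yields an
    i.i.d. sequence of uniforms from a single one.
    """
    a = b = 0.0
    wa = wb = 0.5
    for i in range(bits):
        u *= 2
        d = int(u)   # next binary digit of u
        u -= d
        if i % 2 == 0:
            a += d * wa
            wa /= 2
        else:
            b += d * wb
            wb /= 2
    return a, b

u = random.random()
u1, u2 = split_uniform(u)
# Inverse-CDF (quantile) transform: if U ~ U[0,1], then -log(1-U) ~ Exp(1).
x = -math.log(1 - u1)
```

This is only a finite-precision sketch of the measure-theoretic construction; the point is that one uniform carries enough randomness to manufacture a whole i.i.d. family.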
For more details see Kallenberg - Foundations of Modern Probability (2002), in particular the discussion before Theorem 6.10 (transfer).
The answer of Will Nelson in the thread you linked looks fine to me. The argument in which "$\Omega$ is countable and $(X_n)_{n\in\mathbb N}$ is a sequence of random variables such that their pushforward measures converge weakly to a continuous measure" is flawed. The reason is that in this case there must be an $\mathcal F$-atom $A$ of positive $\mathbb P$-measure, and then, letting $x_n$ be the value of $X_n$ on $A$, there must be a limit point $y\in[-\infty, +\infty]$ of the sequence $(x_n)_{n\in\mathbb N}$. It is then not difficult to check that $\mu(\{y\})\geq\mathbb P(A)>0$, and thus $\mu$ cannot be continuous.
Edit: By an $\mathcal F$-atom I mean an atom of $\mathcal F$ understood as a Boolean algebra. In this case this means some $A\neq\emptyset$ in $\mathcal F$ such that if $B\subsetneq A$ and $B\in\mathcal F$ then $B=\emptyset$. Such an atom $A$ with positive $\mu$-measure must exist, which can be seen as follows: Since $\Omega$ is countable and $\mathcal F$ is a $\sigma$-algebra, for every $\omega\in\Omega$ there is a unique $\mathcal F$-atom $A_\omega$ with $\omega\in A_\omega$. This can be seen by an application of Zorn's lemma, for example. Consider $Z=\{B\in\mathcal F\mid \omega\in B\}$ ordered by $\supseteq$. As $\Omega$ is countable, any chain in $(Z, \supseteq)$ is countable, and as $\mathcal F$ is closed under countable intersections, every chain there has an upper bound (namely the intersection over all elements of the chain). Thus there is a maximal element in $Z$; it is now easy to see that this is what we were looking for. Finally, $\Omega=\bigcup_{\omega\in\Omega} A_\omega$, and as $\Omega$ is countable, we may use countable subadditivity of $\mu$ to see that $$1=\mu(\Omega)\leq\sum_{\omega\in\Omega}\mu(A_\omega),$$ hence there must be some $\omega$ so that $A_\omega$ has positive measure.
Best Answer
Probably someone has posted here before on the definition of lim inf, and from the comments it appears that that definition, rather than anything about random variables, is the focus of most of what is being asked. I'll give a definition and some comments, but also throw in some remarks about the application to random variables.
Consider the sequence $$ 4.9,\,6.1,\,4.99,\, 6.01,\, 4.999,\, 6.001,\, 4.9999,\, 6.0001,\,\ldots $$ The limit inf, or limit inferior, of this sequence is $5,$ and the lim sup is $6.$ This means that for all $\varepsilon>0,$ all but finitely many terms in the sequence are $>5-\varepsilon$ and that is not true of any number bigger than $5,$ and all but finitely many are $<6+\varepsilon$ and that is not true of any number less than $6.$
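If it helps to see this numerically, here is a short Python sketch (my own, not part of the original answer) that approximates the lim inf and lim sup by taking the infimum and supremum over a late tail of a finite truncation of the sequence:

```python
# Finite truncation of the sequence 4.9, 6.1, 4.99, 6.01, ...
terms = []
for k in range(1, 11):
    terms.append(5 - 10.0 ** (-k))  # odd-position terms approach 5 from below
    terms.append(6 + 10.0 ** (-k))  # even-position terms approach 6 from above

# lim inf is the limit of tail infima and lim sup the limit of tail suprema;
# with a finite truncation we can only inspect the inf/sup of a late tail.
tail_inf = min(terms[10:])  # close to 5
tail_sup = max(terms[10:])  # close to 6
print(tail_inf, tail_sup)
```

The tail infima increase toward $5$ and the tail suprema decrease toward $6$, matching the $\varepsilon$-description above.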
In probability theory one could say that an "outcome" $\omega$ is randomly chosen and it determines all values $X_1(\omega), X_2(\omega), X_3(\omega),\ldots,$ and their common dependence on the randomly chosen outcome $\omega$ is where they get their randomness.
But for each separate value of $\omega,$ the lim inf and lim sup are just the lim inf and lim sup of a sequence of numbers, defined as above with no reference to randomness.
The dependence of $\liminf\limits_{n\to\infty} X_n(\omega)$ upon $\omega$ makes the lim inf a random variable in its own right, so one might ask what its distribution is, or what its expected value is, and so on.
Here's an exercise (though probably it's more than an exercise): Toss $n$ coins. Forfeit the "tails"; keep the "heads." Repeat with the remaining coins until their number is either $0$ or $1.$ Show that the lim inf, as $n$ grows, of the probability that it ends at $0$ rather than $1$ differs from the lim sup by a small positive number (about $10^{-4}$ or $10^{-5},$ I think).
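For what it's worth, here is a Python sketch (my own, using an assumed recursion rather than the original exercise's intended proof). Writing $p(n)$ for the probability that the process started from $n$ coins ends at $0$, conditioning on the first round gives $p(n)=\sum_{k=0}^{n}\binom{n}{k}2^{-n}p(k)$, which can be solved for $p(n)$ since $p(n)$ appears on both sides:

```python
from math import comb

def prob_end_at_zero(N):
    """p[n] = probability the coin-halving process started with n coins ends at 0.

    Assumed recursion, by conditioning on the first round:
    p(n) = [sum_{k<n} C(n,k) 2^{-n} p(k)] / (1 - 2^{-n}),  p(0)=1, p(1)=0.
    """
    p = [1.0, 0.0]
    for n in range(2, N + 1):
        s = sum(comb(n, k) * p[k] for k in range(n)) / 2 ** n
        p.append(s / (1 - 2.0 ** (-n)))
    return p

p = prob_end_at_zero(200)
# The values settle down but keep oscillating slightly (roughly periodically
# in log2 n), which is why the lim inf and lim sup differ by a tiny amount.
lo, hi = min(p[50:]), max(p[50:])
print(lo, hi)
```

This only illustrates the oscillation numerically; it does not by itself prove that the lim inf and lim sup differ, since that requires controlling the fluctuation for all large $n$.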