Probability Theory – Interpretation of Sigma Algebra

measure-theory, probability-theory, stochastic-processes

My question is how to interpret a sigma algebra, especially in the context of probability theory (stochastic processes included). I would like to know if there is some clear and general way to interpret a sigma algebra that unifies the various ways of describing it: as history, as the future, as a collection of information, as the class of size/likelihood-measurable sets, etc.

Specifically, I hope to learn how to interpret the following in some consistent way:

  • being given/conditioned on a sigma algebra
  • a subset being measurable or nonmeasurable w.r.t. a sigma algebra
  • a mapping being measurable or nonmeasurable w.r.t. a sigma algebra on its domain and another sigma algebra on its codomain
  • an increasing collection of sigma algebras, i.e. a filtration

The following is a list of examples that I have encountered. They are nice examples, but I feel their ways of interpretation are not clear and consistent enough for me to apply in practice. Even if there is no unified way to interpret all the examples, I would like to know what the different ways of interpretation are.

  1. Stopping time

    Let $(I, \leq)$ be an ordered index
    set, and let $(\Omega, \mathcal{F},\mathcal{F}_t, \mathbb{P})$ be a
    filtered probability space.

    Then a random variable $\tau : \Omega \to I$ is called a stopping time if
    $\{\tau \leq t\} \in \mathcal{F}_t$ for all $t \in I$.

    Speaking concretely, for τ to be a
    stopping time, it should be possible
    to decide whether or not $\{ \tau \leq t \}$ has occurred on the basis of the
    knowledge of $\mathcal{F}_t$, i.e.,
    event $\{ \tau \leq t \}$ is
    $\mathcal{F}_t$-measurable.

    I was still wondering how exactly to "decide whether or not $\{ \tau \leq t \}$ has occurred on the basis of the knowledge of $\mathcal{F}_t$, i.e., event $\{ \tau \leq t \}$ is $\mathcal{F}_t$-measurable."
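On a finite sample space this condition can be checked mechanically. Below is a minimal Python sketch on the two-coin-toss space (the names `tau`, `sigma`, `partitions`, and `is_stopping_time` are made up for illustration). Each $\mathcal{F}_t$ is represented by the partition of $\Omega$ that generates it, so "$A$ is $\mathcal{F}_t$-measurable" means "$A$ is a union of partition blocks", i.e. the information available at time $t$ suffices to say whether $A$ occurred:

```python
# Toy check of the stopping-time condition on the two-toss space.
# Each F_t is represented by the partition of Omega it generates.
omega = ["HH", "HT", "TH", "TT"]

partitions = {
    0: [set(omega)],                            # F_0: know nothing
    1: [{"HH", "HT"}, {"TH", "TT"}],            # F_1: first toss known
    2: [{"HH"}, {"HT"}, {"TH"}, {"TT"}],        # F_2: both tosses known
}

def is_measurable(A, partition):
    """A is measurable iff every block lies inside A or is disjoint from it."""
    return all(b <= A or not (b & A) for b in partition)

def is_stopping_time(tau):
    """Check {tau <= t} in F_t for every t."""
    return all(
        is_measurable({w for w in omega if tau[w] <= t}, partitions[t])
        for t in partitions
    )

# tau = first time a head appears (by convention 2 if no head at all).
tau = {"HH": 1, "HT": 1, "TH": 2, "TT": 2}
# sigma = 1 iff the SECOND toss is a head -- this peeks at the future.
sigma = {"HH": 1, "TH": 1, "HT": 2, "TT": 2}

print(is_stopping_time(tau))    # True: decidable from the tosses so far
print(is_stopping_time(sigma))  # False: {sigma <= 1} is not F_1-measurable
```

So "deciding whether $\{\tau \leq t\}$ has occurred on the basis of $\mathcal{F}_t$" means exactly that $\{\tau \leq t\}$ is a union of the atoms that the time-$t$ information can distinguish.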

  2. Martingale process

    If a stochastic process $Y : T \times \Omega \rightarrow S$ is a martingale
    with respect to a filtration $\{\Sigma_t\}$ and probability measure $P$,
    then for all $s$ and $t$ with $s < t$,
    $$Y_s = \mathbf{E}_{\mathbf{P}} ( Y_t \mid \Sigma_s ),$$

    where $\Sigma_s$ is interpreted as the "history" up to time $s$.

    I was also wondering how, relative to a time $t$, $\Sigma_s$ with $s < t$ can act as history, $\Sigma_t$ as the present, and $\Sigma_s$ with $s > t$ as the future?
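To make the "history" reading concrete, here is a small sketch, assuming the fair two-toss space with uniform probability and the hypothetical process $Y_t = (\text{number of heads in the first } t \text{ tosses}) - t/2$, which is a martingale. It verifies $E[Y_2 \mid \Sigma_1] = Y_1$ by averaging $Y_2$ over each atom of $\Sigma_1$ (i.e. over each possible "history" of the first toss):

```python
from fractions import Fraction

# Fair two-toss space, each outcome with probability 1/4.
omega = ["HH", "HT", "TH", "TT"]
p = Fraction(1, 4)

def Y(t, w):
    """Heads seen in the first t tosses, minus the fair-game drift t/2."""
    return w[:t].count("H") - Fraction(t, 2)

# Sigma_1 is generated by the first toss; its atoms are:
atoms_1 = [["HH", "HT"], ["TH", "TT"]]

for atom in atoms_1:
    # E[Y_2 | Sigma_1] on this atom: probability-weighted average of Y_2.
    cond = sum(p * Y(2, w) for w in atom) / sum(p for w in atom)
    assert all(cond == Y(1, w) for w in atom)  # equals Y_1 on the atom

print("E[Y_2 | Sigma_1] = Y_1 verified on every atom")
```

Conditioning on the history $\Sigma_1$ amounts to averaging over all futures consistent with what has already been observed.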

  3. I originally interpreted a subset that is measurable w.r.t. a sigma algebra as one whose "size"/"likelihood" can be measured, where the class of such size-measurable subsets must be closed under complementation and countable union.
  4. In a post by Nate Eldredge, a subset measurable w.r.t. a sigma algebra is interpreted by analogy with questions that can be answered:

    If I know the answer to a question
    $A$, then I also know the answer to
    its negation, which corresponds to the
    set $A^c$ (e.g. "Is the dodo
    not-extinct?"). So any information
    that is enough to answer question $A$
    is also enough to answer question
    $A^c$. Thus $\mathcal{F}$ should be
    closed under taking complements.
    Likewise, if I know the answer to
    questions $A,B$, I also know the
    answer to their disjunction $A \cup B$
    ("Are either the dodo or the elephant
    extinct?"), so $\mathcal{F}$ must also
    be closed under (finite) unions.
    Countable unions require more of a
    stretch, but imagine asking an
    infinite sequence of questions
    "converging" on a final question.
    ("Can elephants live to be 90? Can
    they live to be 99? Can they live to
    be 99.9?" In the end, I know whether
    elephants can live to be 100.)

Thanks in advance for sharing your views, and any reference that has related discussion is also appreciated!

Best Answer

Gambling is a good starting point for probability. We can treat a $\sigma$-field as a structure on events, in the same way that we need addition and multiplication defined on numbers: the completeness of the real numbers makes our calculations possible there, and the closure properties of a $\sigma$-field play the same role for events.

I hope the following gambling example helps you understand filtrations and conditional expectation.

Suppose that two people, say player A and player B, bet on the results of two coin tosses (H = heads, T = tails).

At time $0$, A and B do not know anything about the result except that one of the outcomes in $\Omega=\{HH,HT,TH,TT\}$ will happen. Hence the information they both have at time $0$ is $\mathcal{F}_0=\{\emptyset,\Omega\}$.

At time $1$, the coin has been tossed once, and for every event in the $\sigma$-field $\mathcal{F}_1=\{\emptyset, \Omega, \{HH,HT\},\{TH,TT\}\}\supset \mathcal{F}_0$ they can tell whether it has occurred.

At time $2$, the coin has been tossed twice, and for every event in the $\sigma$-field $\mathcal{F}_2 = 2^{\Omega} \supset \mathcal{F}_1$ (the collection of all $16$ subsets of $\Omega$, generated by the singletons $\{HH\},\{HT\},\{TH\},\{TT\}$) they can tell whether it has occurred, which means they know everything about the gambling result.

Please notice the evolution of information described by the filtration $\mathcal{F}_0 \subset \mathcal{F}_1 \subset \mathcal{F}_2$. As time passes, the unknown world $\Omega$ is partitioned more and more finely, somewhat like water flowing through ever-narrower pipes.
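The inclusions can be seen by writing the three $\sigma$-fields out explicitly. A minimal sketch, taking $\mathcal{F}_2$ to be the full power set of $\Omega$ (the $\sigma$-field generated by the individual outcomes):

```python
from itertools import chain, combinations

O = ["HH", "HT", "TH", "TT"]

def powerset(xs):
    """All subsets of xs, as frozensets."""
    return {frozenset(c)
            for c in chain.from_iterable(combinations(xs, r)
                                         for r in range(len(xs) + 1))}

F0 = {frozenset(), frozenset(O)}                               # know nothing
F1 = F0 | {frozenset({"HH", "HT"}), frozenset({"TH", "TT"})}   # first toss known
F2 = powerset(O)                     # at time 2 every outcome is distinguishable

print(F0 <= F1 <= F2)              # True: information only grows
print(len(F0), len(F1), len(F2))   # 2 4 16
```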

Suppose the coin is fair and they bet on the following payoff: $$X(\omega)=\begin{cases} 2, & \omega=HH \text{ (first toss H, second toss H)}\\ 1, & \omega=HT \text{ (first toss H, second toss T)}\\ 1, & \omega=TH \text{ (first toss T, second toss H)}\\ 0, & \omega=TT \text{ (first toss T, second toss T)} \end{cases}$$

Then, we have

$$E[X\mid\mathcal{F}_0](\omega)=1\qquad\text{for every } \omega,$$ $$E[X\mid\mathcal{F}_2](\omega)=X(\omega)\qquad\text{for every } \omega,$$ $$E[X\mid\{HH,HT\}]=2P(HH\mid\{HH,HT\})+1P(HT\mid\{HH,HT\})+1P(TH\mid\{HH,HT\})+0P(TT\mid\{HH,HT\})=\frac{3}{2},$$ $$E[X\mid\{TH,TT\}]=2P(HH\mid\{TH,TT\})+1P(HT\mid\{TH,TT\})+1P(TH\mid\{TH,TT\})+0P(TT\mid\{TH,TT\})=\frac{1}{2}.$$

$$E[X\mid\mathcal{F}_1](\omega)=\begin{cases} \frac{3}{2}, & \omega\in \{HH,HT\}\\ \frac{1}{2}, & \omega \in \{TH,TT\} \end{cases}$$
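These values can be reproduced by averaging $X$ over the atoms of each $\sigma$-field, which is all conditional expectation does on a finite space. A minimal sketch (the function name `cond_exp` is made up for illustration):

```python
from fractions import Fraction

# Fair coin, independent tosses: each outcome has probability 1/4.
omega = ["HH", "HT", "TH", "TT"]
prob = {w: Fraction(1, 4) for w in omega}
X = {w: w.count("H") for w in omega}   # the payoff from the example

def cond_exp(X, atoms):
    """E[X | sigma-field]: on each atom, the weighted average of X."""
    out = {}
    for atom in atoms:
        avg = sum(prob[w] * X[w] for w in atom) / sum(prob[w] for w in atom)
        for w in atom:
            out[w] = avg
    return out

E0 = cond_exp(X, [omega])                       # F_0: one atom, E[X] = 1
E1 = cond_exp(X, [["HH", "HT"], ["TH", "TT"]])  # F_1: first toss known
E2 = cond_exp(X, [[w] for w in omega])          # F_2: everything known

print(E1["HH"], E1["TT"])  # 3/2 1/2
```

The three cases mirror the displayed formulas: conditioning on coarser information averages over more outcomes, and conditioning on full information returns $X$ itself.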

I hope this helps.
