Understanding Conditional Expectation and relation to Crossed Product

conditional-expectation, martingales, operator-algebras, probability, random-variables

Let $\mathcal{A}$ be a unital $\Gamma$-$C^*$-algebra, i.e. a unital $C^*$-algebra equipped with an action of a discrete group $\Gamma$. Then one can form the reduced crossed product $C^*$-algebra $\mathcal{A}\rtimes_r\Gamma$. The reduced crossed product comes equipped with a canonical conditional expectation $\mathbb{E}$, which is a unital completely positive map from $\mathcal{A}\rtimes_r\Gamma$ onto $\mathcal{A}$.
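If I understand correctly, on the dense subalgebra of finitely supported sums (writing $\lambda_g$ for the canonical unitaries implementing the action, and $e$ for the identity of $\Gamma$) this map simply reads off the coefficient at the identity:

$$\mathbb{E}\Big(\sum_{g\in\Gamma}a_g\lambda_g\Big)=a_e,\qquad a_g\in\mathcal{A},$$

and is then extended by continuity to all of $\mathcal{A}\rtimes_r\Gamma$.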

I want to understand why it is called a conditional expectation and the theory around it. I know that there is a notion of conditional expectation in the probability theory which is defined as the following:

Let $(\Omega,F,P)$ be a probability triplet, i.e. $\Omega$ is a set, $F$ is a $\sigma$-algebra of subsets of $\Omega$, and $P$ is a $\sigma$-additive probability measure defined on $F$. Let $G\subset F$ be a sub-$\sigma$-algebra; then $L^2(\Omega,G,P)$ is a closed subspace of $L^2(\Omega,F,P)$, and the orthogonal projection of $L^2(F)$ onto $L^2(G)$ is denoted $E(\,\cdot\mid G)$. For $x \in L^2(F)$, $E(x\mid G)$ is called the conditional expectation of $x$ with respect to $G$.
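I have also seen the equivalent characterization (which, as far as I understand, is how the definition is extended from $L^2$ to $L^1$): $E(x\mid G)$ is the $P$-a.e. unique $G$-measurable function satisfying

$$\int_A E(x\mid G)\,dP=\int_A x\,dP\qquad\text{for all }A\in G.$$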

This part I understand, but I have trouble when the discussion turns to martingales. For example, consider the following: Let $\mathcal{B}_n$ be an increasing sequence of $\sigma$-algebras in some probability space $(X,\mathcal{B},\nu)$ that increases to $\mathcal{B}$, meaning that $\mathcal{B}$ is the $\sigma$-algebra generated by the sets in $\cup_n\mathcal{B}_n$.

A sequence of random variables $\{M_n\}$ is a bounded martingale w.r.t. $\{\mathcal{B}_n\}$ if

$(i)$ $E(|M_n|) < \infty$

$(ii)$ $M_n$ is $\mathcal{B}_n$-measurable

$(iii)$ $E\left(M_{n+1}|\mathcal{B}_n\right)=M_n$

I have trouble understanding part $(iii)$ of the above definition. What does the left-hand side mean? How is it defined?

It would also be very helpful if you could point me to a source or book that would help my understanding.

Thanks a lot!!

Best Answer

It looks like you gave the definition of the left-hand side of (iii) in your definition of the conditional expectation. For example, if the $X_i$ are i.i.d. random variables distributed uniformly on $\{-1, 1\}$ (maybe modeling wins/losses on a coin-flip bet), then $M_n = \sum_1^n X_i$ (i.e. the total winnings so far) is a martingale, where $\mathcal{B}_n$ is the $\sigma$-algebra generated by $\{X_1, ..., X_n\}$. One way of saying this is that, given the current state of the game, the expected value of the winnings after the next flip is always the same as the current winnings. Maybe check that $E(M_2 | \sigma(X_1)) = X_1$ using the definitions, if this is still unclear.
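To spell out the suggested check: using linearity of conditional expectation, the fact that $X_1$ is $\sigma(X_1)$-measurable, and the independence of $X_2$ from $\sigma(X_1)$,

$$E(M_2\mid\sigma(X_1))=E(X_1\mid\sigma(X_1))+E(X_2\mid\sigma(X_1))=X_1+E(X_2)=X_1+0=X_1.$$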

That said, you don't necessarily need martingales to understand operator-algebraic conditional expectations. The main motivation for the nomenclature comes from the fact that for the von Neumann algebra $M = L^\infty(\Omega, F, P)$ (which sits inside $L^2(\Omega, F, P)$, since $P$ is a finite measure), the restriction of the measure-theoretic conditional expectation $E = E(\cdot \mid G)$ to $M$ gives an operator-algebraic conditional expectation $E: M \rightarrow L^\infty(G)$.

So the conditional expectation you defined naturally restricts to a completely positive $L^\infty(G)$-bimodular projection from $L^\infty(F)$ onto $L^\infty(G)$. Moreover, every conditional expectation $E: M \rightarrow N$ on a separable abelian von Neumann algebra $M$ can be realized in this way, in the sense that there is a probability space $(\Omega, F, P)$, a sub-$\sigma$-algebra $G \subset F$, and an isomorphism $\Phi: M \rightarrow L^\infty(F)$ such that $\Phi(E(x)) = E(\Phi(x)\mid G)$ for all $x\in M$. So, in some sense, conditional expectations on C$^*$-/W$^*$-algebras are the natural non-commutative generalizations of the measure-theoretic conditional expectation.
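For reference, the operator-algebraic definition being used here: a conditional expectation onto a subalgebra $N\subset M$ is a unital (completely) positive projection $E: M\to N$ satisfying the bimodule property

$$E(a\,x\,b)=a\,E(x)\,b\qquad\text{for all }a,b\in N,\ x\in M.$$

In the commutative setting above this is exactly the familiar "taking out what is known" rule $E(fx\mid G)=f\,E(x\mid G)$ for bounded $G$-measurable $f$.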
