Conditional probability in Galton-Watson


In discussing Galton–Watson chains one considers, for a given generation $n$, iid $\nu$-distributed random variables $x_1^n,\ldots,x_k^n$, where $\nu$ is a probability measure on $\mathbb{N}$. At some point in the discussion the following equality is used for a fixed positive integer $k$:
$$E[\mathbb{1}_{x_1^n+\cdots+x_k^n=k}\mid\sigma(x_2^n,\ldots,x_k^n)]=\nu\big(k-(x_2^n + \cdots +x_k^n)\big)$$
and I am failing to understand why this equality holds. This is not really specific to Galton–Watson; I mention it only to indicate where the question comes from.
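As a sanity check, the claimed identity can be verified numerically: conditionally on $x_2^n,\ldots,x_k^n$, only $x_1^n$ is random, so the conditional expectation of the indicator should equal $\nu(k - (x_2^n+\cdots+x_k^n))$. The sketch below uses $\nu = \mathrm{Poisson}(1)$ purely for concreteness; the question's $\nu$ is an arbitrary measure on $\mathbb{N}$.

```python
# Monte Carlo check of
#   E[1_{x_1+...+x_k = k} | x_2,...,x_k] = nu(k - (x_2+...+x_k)),
# taking nu = Poisson(lam) for concreteness.
import math
import numpy as np

rng = np.random.default_rng(0)
lam, k = 1.0, 5

def nu(j):
    # pmf of Poisson(lam) on the nonnegative integers; nu(j) = 0 for j < 0
    return math.exp(-lam) * lam**j / math.factorial(j) if j >= 0 else 0.0

# Fix one realization of (x_2, ..., x_k) and let s be its sum.
x_rest = rng.poisson(lam, size=k - 1)
s = int(x_rest.sum())

# Given x_2, ..., x_k, the indicator is 1 exactly when x_1 = k - s,
# so its conditional expectation is P(x_1 = k - s) = nu(k - s).
x1 = rng.poisson(lam, size=200_000)
mc = np.mean(x1 + s == k)

print(mc, nu(k - s))  # the two values should agree up to Monte Carlo error
```

With 200,000 draws the Monte Carlo standard error is on the order of $10^{-3}$, so the two printed values agree to about two decimal places.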

Best Answer

An argument based on conditional distributions goes as follows. Let $(P_x)$ be a regular conditional distribution of $Y$ given $X$. Then \begin{align*} \mathbb{E}[h(X,Y) \, | \, X = x] = \int h(x,y) \, P_x(\mathrm{d}y), \end{align*} cf. Theorem 2.2.3 in the following excellent lecture notes: http://web.math.ku.dk/noter/filer/beting.pdf. (Similar results also appear in standard textbook references on the subject.) If $X$ and $Y$ are independent, then one may take $P_x$ to be the distribution $Y(\mathbb{P})$ of $Y$, so that \begin{align*} \mathbb{E}[h(X,Y) \, | \, X = x] = \int h(x,y) \, Y(\mathbb{P})(\mathrm{d}y) = \mathbb{E}[h(x,Y)]. \end{align*} Thus $\mathbb{E}[h(X,Y) \, | \, X] = \mathbb{E}[h(x,Y)]\rvert_{x=X}$. Alternatively, one may check directly that the latter expression satisfies the defining properties of the desired conditional expectation.

The result now follows by choosing $X$, $Y$, and $h$ appropriately, as we discussed in the comments.
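For concreteness, one natural choice consistent with the question's notation (spelling out what the comments only sketched) is $X = (x_2^n,\ldots,x_k^n)$, $Y = x_1^n$, and $h(x_2,\ldots,x_k,\,y) = \mathbb{1}_{\{y + x_2 + \cdots + x_k = k\}}$. Since the $x_i^n$ are independent, the identity above gives \begin{align*} \mathbb{E}\big[\mathbb{1}_{\{x_1^n + \cdots + x_k^n = k\}} \,\big|\, \sigma(x_2^n,\ldots,x_k^n)\big] = \mathbb{P}\big(x_1^n = k - s\big)\big\rvert_{s = x_2^n + \cdots + x_k^n} = \nu\big(k - (x_2^n + \cdots + x_k^n)\big), \end{align*} which is exactly the equality in question.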
