[Math] What does “taking expectation w.r.t. some random variable” mean in this probability calculation

expectation, probability, probability-distributions, probability-theory, stochastic-processes

I am trying to calculate the following probability

$$\mathbb{P} \big(\sum_{i=1}^{m} (A_i + S_i) \le L < \sum_{i=1}^{m+1} (A_i + S_i) \big)$$

where,

$$A_i \sim \exp(\lambda), \quad S_i \sim \exp(\mu), \quad L \sim \exp(\lambda), \quad \textrm{where } \lambda \textrm{ and } \mu \textrm{ are distinct integers.}$$

All $A_i, S_i, L$ are mutually independent. $m$ is an integer parameter.

For convenience, let
$$\sum_{i=1}^{m} (A_i + S_i) \triangleq R_m, \quad \sum_{i=1}^{m+1} (A_i + S_i) \triangleq R_{m+1}.$$
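
As a side note for later use (assuming the rate parametrization, in which $\exp(\lambda)$ has density $\lambda e^{-\lambda x}$ on $x \ge 0$): $R_m$ is a sum of $2m$ independent exponentials, so its Laplace transform factorizes,

$$\mathbb{E}\big[e^{-s R_m}\big] = \prod_{i=1}^{m} \mathbb{E}\big[e^{-s A_i}\big]\,\mathbb{E}\big[e^{-s S_i}\big] = \left(\frac{\lambda}{\lambda+s}\right)^{m} \left(\frac{\mu}{\mu+s}\right)^{m}, \qquad s \ge 0.$$

This is the identity that ultimately evaluates the expectation appearing in the hint below.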


I was given the following hint as a starting point:

"As $L$ is $\exp(\lambda)$ and independent of $(R_m, R_{m+1})$, taking expectation with respect to $L$ gives (also due to Fubini's theorem)"
$$\mathbb{P}(R_m \le L < R_{m+1}) = \mathbb{E}\big(e^{-\lambda R_m} - e^{-\lambda R_{m+1}}\big).$$

I am confused by this step and cannot figure out where Fubini's theorem comes into play.

Question: Could you please give me more hints for the calculation of the above probability?
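
For orientation, the hint's identity can be made plausible by conditioning on $(R_m, R_{m+1})$ and using the exponential survival function $P(L \ge x) = e^{-\lambda x}$ (a sketch; justifying this interchange rigorously is exactly the Fubini/Tonelli point the answer below addresses):

$$\mathbb{P}(R_m \le L < R_{m+1}) = \mathbb{E}\big[\mathbb{P}(R_m \le L < R_{m+1} \mid R_m, R_{m+1})\big] = \mathbb{E}\big[e^{-\lambda R_m} - e^{-\lambda R_{m+1}}\big],$$

since, conditionally on $R_m = r$ and $R_{m+1} = s$, the probability that $r \le L < s$ is $e^{-\lambda r} - e^{-\lambda s}$, $L$ being independent of $(R_m, R_{m+1})$.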


Background of the probability:

Consider an alternating renewal process, in which a system can be in one of two states: on or off. Whenever it is off, it stays off for a time $A_i \sim \exp(\lambda)$ before turning on; whenever it turns on, it remains on for a time $S_i \sim \exp(\mu)$.

Initially the system is off. A cycle $c_i$ consists of the $i$-th off-state and the $i$-th on-state. All $A_i$ and $S_i$ are mutually independent.

The question is: Given a time period $L \sim \exp(\lambda)$, what is the probability that exactly $m$ complete cycles occur within $L$?
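
As a numerical sanity check on whatever closed form the calculation yields, here is a minimal Monte Carlo sketch (assumptions: the rate parametrization as above; the values $\lambda = 2$, $\mu = 3$, $m = 2$ and the name `estimate_prob` are illustrative, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def estimate_prob(lam, mu, m, n=10**6):
    """Monte Carlo estimate of P(R_m <= L < R_{m+1})."""
    # NumPy's `scale` is the mean, i.e. 1/rate in the rate parametrization.
    A = rng.exponential(scale=1.0 / lam, size=(n, m + 1))  # off-times A_i
    S = rng.exponential(scale=1.0 / mu, size=(n, m + 1))   # on-times S_i
    R = (A + S).cumsum(axis=1)                   # R[:, k-1] holds R_k
    R_m = R[:, m - 1] if m > 0 else np.zeros(n)
    R_m1 = R[:, m]                               # R_{m+1}
    L = rng.exponential(scale=1.0 / lam, size=n)  # observation window
    return np.mean((R_m <= L) & (L < R_m1))

print(estimate_prob(lam=2.0, mu=3.0, m=2))
```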

Best Answer

To simplify, consider only two independent random variables $L$ and $R$; then you are asking why
$$P(R\leqslant L)=E(G(R)),$$
where the function $G$ is defined by
$$G(x)=P(L\geqslant x).$$

Indeed, the reason is Tonelli's theorem (the version of Fubini's theorem for nonnegative functions), since
$$P(R\leqslant L)=\iint\mathbf 1_{x\leqslant y}\,\mathrm dP_{(R,L)}(x,y).$$
By hypothesis, $R$ and $L$ are independent, hence $P_{(R,L)}=P_R\otimes P_L$; in particular,
$$P(R\leqslant L)=\iint\mathbf 1_{x\leqslant y}\,\mathrm dP_R(x)\,\mathrm dP_L(y),$$
and, by Tonelli's theorem,
$$P(R\leqslant L)=\int\left(\int\mathbf 1_{x\leqslant y}\,\mathrm dP_L(y)\right)\mathrm dP_R(x).$$

By definition of $P_L$, each inner integral equals $P(L\geqslant x)=G(x)$; hence, by definition of $P_R$,
$$P(R\leqslant L)=\int G(x)\,\mathrm dP_R(x)=E(G(R)).$$
Likewise,
$$P(R\leqslant L)=E(H(L)),\qquad H(y)=P(R\leqslant y).$$
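
For completeness, a sketch of how this identity finishes the original calculation (still assuming the rate parametrization): specializing to $G(x) = P(L \ge x) = e^{-\lambda x}$ and using the factorized Laplace transform of $R_m$ at $s = \lambda$,

$$\mathbb{E}\big[e^{-\lambda R_m}\big] = \left(\frac{\lambda}{\lambda+\lambda}\right)^{m} \left(\frac{\mu}{\mu+\lambda}\right)^{m} = \left(\frac{\mu}{2(\lambda+\mu)}\right)^{m},$$

so the hint's expression evaluates to

$$\mathbb{P}(R_m \le L < R_{m+1}) = \left(\frac{\mu}{2(\lambda+\mu)}\right)^{m} \left(1 - \frac{\mu}{2(\lambda+\mu)}\right).$$

With the illustrative values $\lambda = 2$, $\mu = 3$, $m = 2$, this equals $0.09 \times 0.7 = 0.063$, against which the Monte Carlo sketch above can be checked.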
