Need help developing the correct intuition for the tail-sum formula for a continuous random variable.


For a discrete random variable:
$$E(X)=\sum_{k=1}^{\infty}P(X\ge k)=\sum_{k=0}^{\infty}P(X>k)$$
I have an intuition for where this comes from, since the terms of $E(X)=\sum_k k\,P(X=k)$ can be laid out row by row:
$P(X= 1)$.
$P(X= 2) + P(X= 2)$.
$P(X= 3) + P(X= 3) + P(X= 3)$.

If we sum them "vertically", the $j$-th column adds up to $P(X\ge j)$, which is exactly the formula above.
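As a sanity check (my own illustration, not from the original post), the discrete tail-sum identity can be verified numerically for a geometric distribution on $\{1,2,3,\dots\}$ with success probability $p$ (the choice $p=0.3$ is arbitrary):

```python
import numpy as np

# Check E(X) = sum_{m>=0} P(X > m) for X ~ Geometric(p) on {1, 2, 3, ...}.
p = 0.3
k = np.arange(1, 200)          # truncate the infinite sum; the tail is negligible
pmf = (1 - p) ** (k - 1) * p   # P(X = k)

e_direct = np.sum(k * pmf)     # E(X) = sum_k k * P(X = k)

# P(X > m) for m = 0, 1, ..., 199, then sum the tail probabilities
tail = np.array([pmf[k > m].sum() for m in range(200)])
e_tail = tail.sum()            # sum_{m>=0} P(X > m)

print(e_direct, e_tail, 1 / p)  # all three should agree: E(X) = 1/p
```

Both sums reproduce the known mean $1/p$, which is just the "vertical" regrouping of the same terms.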

For a continuous random variable:
We were presented with the following formula:
$$E(X)=\int_0^{\infty}(1-F_X(x))dx-\int_{-\infty}^0F_X(x)dx$$
I understand that it was presented to us this way so that we can use just the first integral when we know $X$ is a nonnegative random variable, but I couldn't quite get the idea behind the formula itself. The first integral looks like the one in the discrete case, since $1-F_X(x)=P(X>x)$, so why do we need to subtract the second integral? And why didn't we do that in the discrete formula?

Thanks in advance for any help!

Best Answer

As per the continuous formula you correctly posted, this is the geometric explanation:

[Figure: graph of the CDF $F_X$; the region between $F_X$ and $1$ for $x>0$ is shaded purple, and the region under $F_X$ for $x<0$ is shaded yellow.]

The expectation is the purple area minus the yellow one.

To verify this, simply integrate your expression by parts, finding that

$$\int_0^{\infty}\left[1-F_X(x)\right]dx-\int_{-\infty}^0F_X(x)dx=\int_{-\infty}^{\infty}x f_X(x)dx=\mathbb{E}[X]$$
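This identity can also be checked numerically (my own illustration, not from the answer). Here $X\sim\mathcal{N}(\mu,1)$ with the arbitrary choice $\mu=0.7$, so that both integrals contribute, and the improper integrals are truncated at $\pm 20$ where the normal tails are negligible:

```python
import numpy as np
from math import erf, sqrt

# Check E(X) = ∫_0^∞ [1 - F(x)] dx - ∫_{-∞}^0 F(x) dx for X ~ Normal(mu, 1).
mu = 0.7

def F(x):
    # CDF of Normal(mu, 1), written via the error function
    return 0.5 * (1 + erf((x - mu) / sqrt(2)))

xs_pos = np.linspace(0, 20, 20001)    # truncation of [0, ∞)
xs_neg = np.linspace(-20, 0, 20001)   # truncation of (-∞, 0]

I1 = np.trapz([1 - F(x) for x in xs_pos], xs_pos)
I2 = np.trapz([F(x) for x in xs_neg], xs_neg)

print(I1 - I2)  # ≈ 0.7, i.e. E(X) = mu
```

The difference of the two areas recovers the mean $\mu$, matching the purple-minus-yellow picture above.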

Similar reasoning can be applied to a discrete r.v.


Integration by parts:

$$I_1=\int_0^{\infty}\left[1-F_X(x)\right]dx=\underbrace{\left.x(1-F_X(x)) \right]_{0}^{\infty}}_{=0}+\int_0^{\infty}xf_X(x)dx$$

$$I_2=-\int_{-\infty}^0F_X(x)dx=-\left[\underbrace{\left.xF_X \right]_{-\infty}^{0}}_{=0}-\int_{-\infty}^{0}xf_X(x)dx\right]=\int_{-\infty}^0 xf_X(x)dx$$
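The underbraced boundary terms vanish whenever $\mathbb{E}|X|<\infty$; a quick sketch of why (for the positive side):

$$0\le x\left[1-F_X(x)\right]=x\int_x^{\infty}f_X(t)\,dt\le\int_x^{\infty}t\,f_X(t)\,dt\xrightarrow{x\to\infty}0,$$

since $x\le t$ over the domain of integration and the last integral is the tail of the convergent integral $\int_0^\infty t\,f_X(t)\,dt$. The claim $xF_X(x)\to 0$ as $x\to-\infty$ follows in the same way on the negative side.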
