Upper bound expectation by integrating tail bound

Tags: inequality, machine-learning, probability, upper-lower-bounds

For a positive random variable $X$ and all $\delta \geq 0$, I have a tail bound of the form:
$$\mathbb{P}(X > a + b\delta) \leq e^{-\delta}$$
where $a, b> 0$.

I want to upper bound $\mathbb{E}[X]$. Usually I would use the following identity for positive random variables:

$$\mathbb{E}[X] = \int_0^\infty \mathbb{P}(X > x)dx$$

But of course we can't directly apply this, since our bound on the tail includes the offset term $a$. If I knew that $X > a$ a.s., then I could apply the identity to $(X-a)/b$ instead, but I don't have any such guarantee.
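For concreteness, here is a small numerical sanity check of the identity above (a minimal sketch, not essential to the question; the choice $X = a + bE$ with $E \sim \mathrm{Exp}(1)$ is just a hypothetical example, which satisfies the tail bound with equality):

```python
import numpy as np

# Hypothetical example: X = a + b*E with E ~ Exp(1), so that
# P(X > a + b*delta) = exp(-delta) holds (with equality).
rng = np.random.default_rng(0)
a, b = 2.0, 0.5
X = a + b * rng.exponential(size=1_000_000)

# Layer-cake identity: E[X] = \int_0^\infty P(X > x) dx,
# approximated by integrating the empirical survival function.
xs = np.linspace(0.0, X.max(), 2_000)
X_sorted = np.sort(X)
survival = 1.0 - np.searchsorted(X_sorted, xs, side="right") / X.size
integral = np.trapz(survival, xs)

print("empirical mean :", X.mean())   # both are close to a + b = 2.5
print("integrated tail:", integral)
```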

This question comes from following the proof of Theorem 8.3 in [1]: after the tail bound is obtained, the authors claim that the bound on the expectation follows by integrating. (See page 301 of this PDF for the statement of the theorem and page 305 for the claim: https://www.researchgate.net/profile/Pascal_Massart/publication/245759642_Concentration_Inequalities_and_Model_Selection/links/540ee8990cf2df04e758a212/Concentration-Inequalities-and-Model-Selection.pdf.)

[1] Massart, Pascal. Concentration inequalities and model selection. Vol. 6. Berlin: Springer, 2007.

Best Answer

As suggested by @Exodd, split the integral at $a$ and substitute $x = a + b\delta$ (so $dx = b\,d\delta$) in the second piece:
\begin{align}
\mathbb{E}[X] &= \int_0^\infty \mathbb{P}(X>x)\,dx \\
&= \int_0^a \mathbb{P}(X>x)\,dx + \int_a^\infty \mathbb{P}(X>x)\,dx \\
&= \int_0^a \mathbb{P}(X>x)\,dx + b\int_0^\infty \mathbb{P}(X>a+b\delta)\,d\delta \\
&\le \int_0^a 1\,dx + b\int_0^\infty e^{-\delta}\,d\delta \\
&= a + b.
\end{align}
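As a quick sanity check (not part of the original answer, but a standard observation), the constant $a + b$ cannot be improved in general: for $X = a + bE$ with $E \sim \mathrm{Exp}(1)$ the tail bound holds with equality, and
$$\mathbb{E}[X] = a + b\,\mathbb{E}[E] = a + b.$$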
