[Math] Lower bound for the cumulative distribution function on the left of the mean

geometric-probability, inequality, probability-distributions, probability-theory, statistics

Let $X$ be a random variable with mean $\mu$ and variance $\sigma^2$. Let $a < \mu$ and consider the probability
$$
F_X(a) = \mathbb{P}(X \leq a) = \mathbb{P}(X - \mu \leq a - \mu).
$$
If $a > \mu$, Cantelli's inequality (see https://en.wikipedia.org/wiki/Cantelli%27s_inequality) gives a good lower bound
for $F_X(a)$. However, is there a good lower bound (in terms of moments) we can use in the case $a < \mu$? Thanks.
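To see the kind of bound Cantelli gives when $a > \mu$, here is a quick numerical illustration comparing the bound $\mathbb{P}(X \leq \mu + \lambda) \geq \frac{\lambda^2}{\sigma^2 + \lambda^2}$ against the true CDF of a standard normal (the choice of test distribution is mine, for illustration only):

```python
import math

# Compare Cantelli's lower bound P(X <= mu + lambda) >= lambda^2/(sigma^2 + lambda^2)
# with the exact CDF of a standard normal.
mu, sigma = 0.0, 1.0
for lam in [0.5, 1.0, 2.0]:
    a = mu + lam
    cantelli = lam**2 / (sigma**2 + lam**2)  # lower bound on P(X <= a)
    true_cdf = 0.5 * (1 + math.erf((a - mu) / (sigma * math.sqrt(2))))
    assert cantelli <= true_cdf  # the bound is valid (though not tight)
```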

If it is of help note we can also write, for any $\lambda > 0$,
$$
\mathbb{P}(X \leq \mu - \lambda) = \mathbb{P}(X - \mu < \lambda) - \mathbb{P}(-\lambda < X - \mu < \lambda).
$$
To get a lower bound for $\mathbb{P}(X \leq \mu - \lambda)$ we can apply Cantelli's bound to the first term, but we still need an upper bound for the term $\mathbb{P}(-\lambda < X - \mu < \lambda)$.
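The decomposition above can be checked numerically on any distribution; here is a quick sanity check on an arbitrary discrete example (the values and probabilities are mine, chosen only for illustration):

```python
# Verify P(X <= mu - lam) = P(X - mu < lam) - P(-lam < X - mu < lam)
# on a small discrete distribution.
values = [-2.0, 0.0, 1.0, 3.0]
probs = [0.2, 0.3, 0.4, 0.1]

mu = sum(p * v for p, v in zip(probs, values))
lam = 1.5

lhs = sum(p for p, v in zip(probs, values) if v <= mu - lam)
rhs = (sum(p for p, v in zip(probs, values) if v - mu < lam)
       - sum(p for p, v in zip(probs, values) if -lam < v - mu < lam))

assert abs(lhs - rhs) < 1e-12  # the two sides agree
```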

Best Answer

If you do not have any further assumption on the values $X$ can take (e.g., is it lower bounded a.s.?), then you cannot get any meaningful lower bound. For any $\varepsilon\in(0,1)$ (and, wlog, taking $\mu=0$), consider the random variable defined by $$ X = \begin{cases} -x\frac{1-\varepsilon}{\varepsilon} & \text{ w.p. } \varepsilon \\ x & \text{ w.p. } 1-\varepsilon \end{cases} $$ where $x = \sigma\sqrt{\frac{\varepsilon}{1-\varepsilon}}$.

You have $\mathbb{E} X = 0 = \mu$, and $\operatorname{Var} X = \sigma^2$; yet $\mathbb{P}\{ X < \mu\} = \varepsilon$ can be arbitrarily small.
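A quick numerical check of this two-point construction (variable names are mine, not from the answer):

```python
import math

# Two-point counterexample: mean 0, variance sigma^2, yet P(X < mu) = eps.
sigma = 2.0
for eps in [0.1, 0.01, 0.001]:
    x = sigma * math.sqrt(eps / (1 - eps))
    values = [-x * (1 - eps) / eps, x]
    probs = [eps, 1 - eps]

    mean = sum(p * v for p, v in zip(probs, values))
    var = sum(p * (v - mean) ** 2 for p, v in zip(probs, values))
    p_below_mean = sum(p for p, v in zip(probs, values) if v < mean)

    assert abs(mean) < 1e-9               # E X = 0 = mu
    assert abs(var - sigma**2) < 1e-9     # Var X = sigma^2
    assert abs(p_below_mean - eps) < 1e-12  # P(X < mu) = eps, arbitrarily small
```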


Now assume $X \geq 0$ a.s. (as suggested in a comment). Even then, one cannot get a non-trivial bound. Namely,

Fix any $\mu> 0$, $\sigma^2\geq 0$. For any $a\in[0,\mu)$, there exists a random variable $X\in L^2$ with $X\geq 0$ a.s., $\mathbb{E}X = \mu$, and $\operatorname{Var}X = \sigma^2$, satisfying $$\mathbb{P}\{ X \leq a\} = 0.$$

Note that up to renormalization by $\mu$ (of the standard deviation and $a$), we can wlog assume $\mu = 1$. For fixed $\sigma,a$ as above, define $\alpha \stackrel{\rm def}{=}\frac{a+1}{2}\in(a,1)$, and let $\beta$ be the solution of the equation $$ \sigma^2 + 1 = \alpha + \beta - \alpha\beta $$ i.e. $\beta = 1+\frac{\sigma^2}{1-\alpha} > 1$.

Let $X$ be the random variable taking values in $\{\alpha,\beta\}$, defined as $$ X = \begin{cases} \alpha &\text{ w.p. } \frac{\beta-1}{\beta-\alpha} \\ \beta &\text{ w.p. } \frac{1-\alpha}{\beta-\alpha} \end{cases} $$ so that indeed $$ \begin{align} \mathbb{E} X &= 1 \\ \operatorname{Var} X &= \mathbb{E}[X^2] - (\mathbb{E} X)^2 \\ &= \frac{1}{\beta-\alpha}\left(\alpha^2(\beta-1) + \beta^2(1-\alpha)\right) - 1 = \alpha+\beta-\alpha\beta -1 \\ &= \sigma^2. \end{align} $$ $X$ satisfies all the assumptions, and yet $$ \mathbb{P}\{X \leq a\} = \mathbb{P}\{X < \alpha\} = 0. $$
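The construction can likewise be verified numerically; here is a sketch with $\mu = 1$ and sample choices of $\sigma^2$ and $a$ (the specific numbers are mine):

```python
# Nonnegative two-point counterexample with mu = 1: P(X <= a) = 0.
sigma2 = 0.5   # any sigma^2 >= 0
a = 0.3        # any a in [0, 1)

alpha = (a + 1) / 2
beta = 1 + sigma2 / (1 - alpha)

values = [alpha, beta]
probs = [(beta - 1) / (beta - alpha), (1 - alpha) / (beta - alpha)]

mean = sum(p * v for p, v in zip(probs, values))
var = sum(p * v**2 for p, v in zip(probs, values)) - mean**2

assert abs(mean - 1) < 1e-12   # E X = 1 = mu
assert abs(var - sigma2) < 1e-12  # Var X = sigma^2
assert all(v > a for v in values)  # both atoms exceed a, so P(X <= a) = 0
```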

At that point, it looks to me that one would need the assumption that $X$ be bounded to get some interesting lower bound.
