Maximal type inequality for sum (or average) of i.i.d. random variables

probability-theory, real-analysis

Let $Z_i$ be i.i.d. random variables with $\mathbb{E}[Z_i] = 0$ and $\mathbb{E}|Z_i|^p < \infty$ for $p = 1, 2, 3, \cdots$. I am looking for the following type of estimate, if possible; it is not like the concentration inequalities that one normally sees.

There exists $N_0$ sufficiently large and $t_0$ sufficiently small
such that for all $N \geq N_0$ and $1/N < t \leq t_0$, we have
$$\mathbb{P}\left\{\max_{1 \leq k \leq N} \left( \frac{1}{k}\sum_{i=1}^k Z_i \right) \leq t \right\} \leq C t^\alpha \tag{$\star$}$$
or equivalently
$$\mathbb{P}\left\{\max_{1 \leq k \leq N} \left( \sum_{i=1}^k Z_i - tk \right) \leq 0 \right\} \leq C t^\alpha.$$
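
For a quick empirical look at what $\alpha$ might be, here is a minimal Monte Carlo sketch in Python; the standard normal choice for the $Z_i$ and the particular values of $N$ and $t$ are illustrative assumptions only (any mean-zero distribution with all moments would fit the setup).

```python
import numpy as np

rng = np.random.default_rng(0)

def max_average_prob(N, t, trials=10_000):
    """Monte Carlo estimate of P( max_{1<=k<=N} (1/k) * sum_{i<=k} Z_i <= t )."""
    Z = rng.standard_normal((trials, N))              # assumption: Z_i ~ N(0, 1)
    running_means = Z.cumsum(axis=1) / np.arange(1, N + 1)
    return np.mean(running_means.max(axis=1) <= t)

# Probe the conjectured bound P <= C * t^alpha for a few small t:
for t in (0.05, 0.1, 0.2):
    print(t, max_average_prob(N=1000, t=t))
```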

(I know the distributions of $Z_i$'s, if this is helpful).

Is there a name for this type of inequality, where we look at the maximum of the averages (or of the partial sums, but where we cannot move the constant to the other side, as in $(\star)$ above)?

I found a related general result in this paper by Chung; there the mean-zero random variables are only assumed to be independent. In his notation, $S_n^* = \max_{1\leq k\leq n} |S_k|$ and $s_n^2 = \operatorname{Var}[S_n]$, which is $Cn$ in the i.i.d. case. We have

Theorem 2. If $g_n \downarrow 0$ and
$$g_n^{-1} = O((\log_2 s_n)^{1/2}),$$
then we have
$$\mathbb{P}(S_n^* < g_n s_n) = (1+o(1)) \exp\left(-\frac{\pi^2}{8g_n^2}\right).$$
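
As a rough numerical illustration of the shape of this asymptotic (not of the full theorem, which requires $g_n \downarrow 0$), one can compare the empirical probability with $\exp(-\pi^2/(8g^2))$ at a fixed small $g$ for Gaussian increments; the parameter values below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def chung_check(n=1000, g=0.6, trials=10_000):
    """Compare P(S_n^* < g * s_n) with exp(-pi^2 / (8 g^2)) for Z_i ~ N(0,1),
    so that s_n = sqrt(Var[S_n]) = sqrt(n)."""
    Z = rng.standard_normal((trials, n))
    S_star = np.abs(Z.cumsum(axis=1)).max(axis=1)    # S_n^* = max_{k<=n} |S_k|
    empirical = np.mean(S_star < g * np.sqrt(n))
    asymptotic = np.exp(-np.pi**2 / (8 * g**2))
    return empirical, asymptotic

# At fixed g the two differ by a constant prefactor (4/pi for Brownian motion);
# the (1 + o(1)) in the theorem concerns the regime where g_n -> 0 slowly.
print(chung_check())
```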

Is there a simpler inequality of this type for i.i.d. random variables? The proof of this inequality in his general setting is quite technical.

Background:
The original event that I was trying to estimate is
$$\left\{\inf_{1\leq k \leq tN}\ \sup_{tN \leq l \leq N}\ \sum_{i=k+1}^l (X_i - Y_i) \leq 0\right\}$$

where $X_i \sim \operatorname{Exp}(\rho)$ and $Y_i \sim \operatorname{Exp}(\rho - t)$, parametrized by rate, all independent of each other.

As in Kolmogorov's or Doob's maximal inequality, it may help to center the random variables. Defining
$Z_i = X_i - Y_i - \mathbb{E}[X_i - Y_i]$, and noting that $\mathbb{E}[X_i - Y_i] = \frac{1}{\rho} - \frac{1}{\rho - t} = -\frac{t}{\rho(\rho-t)}$, we get the centered version
$$\left\{\inf_{1\leq k \leq tN}\ \sup_{tN \leq l \leq N}\ \sum_{i=k+1}^l \left(Z_i - \frac{t}{\rho(\rho-t)} \right) \leq 0 \right\},$$

and, since $\frac{t}{\rho(\rho-t)}$ is comparable to $t$ for $t \leq t_0 < \rho$, this boils down to estimating
$$\mathbb{P} \left\{\inf_{1\leq k \leq tN}\ \sup_{tN \leq l \leq N} \left( \frac{1}{l-k}\sum_{i=k+1}^l Z_i \right) \leq t \right\} \leq C t^\alpha$$

for some positive $C, \alpha$.
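
The original event can also be simulated directly. A useful simplification: writing $P_j = \sum_{i \leq j} (X_i - Y_i)$ with $P_0 = 0$, we have $\inf_k \sup_l (P_l - P_k) = \max_{tN \leq l \leq N} P_l - \max_{1 \leq k \leq tN} P_k$, so the event is just a comparison of two running maxima. Below is a hedged sketch; the values of $N$, $t$, $\rho$ are illustrative assumptions, and the exponentials are parametrized by rate, matching the centering constant above.

```python
import numpy as np

rng = np.random.default_rng(2)

def inf_sup_prob(N=500, t=0.05, rho=1.0, trials=10_000):
    """Monte Carlo estimate of
    P( inf_{1<=k<=tN} sup_{tN<=l<=N} sum_{i=k+1}^{l} (X_i - Y_i) <= 0 )
    with X_i ~ Exp(rho), Y_i ~ Exp(rho - t), all independent."""
    X = rng.exponential(1.0 / rho, size=(trials, N))        # numpy takes the scale = 1/rate
    Y = rng.exponential(1.0 / (rho - t), size=(trials, N))
    # Prefix sums P_j with P_0 = 0, so that sum_{i=k+1}^{l} (X_i - Y_i) = P_l - P_k.
    P = np.concatenate([np.zeros((trials, 1)), (X - Y).cumsum(axis=1)], axis=1)
    kmax = max(1, int(t * N))                               # k ranges over 1..tN
    # inf_k sup_l (P_l - P_k) = max_{l in [tN, N]} P_l - max_{k in [1, tN]} P_k
    stat = P[:, kmax:].max(axis=1) - P[:, 1:kmax + 1].max(axis=1)
    return np.mean(stat <= 0)

print(inf_sup_prob())
```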

Final remark:
One way to get some kind of tail estimate is to pass to Brownian motion via Donsker's theorem, from which we could obtain
$$\limsup_{N\rightarrow \infty} \mathbb{P} \left\{\inf_{1\leq k \leq tN} \sup_{tN
\leq l \leq N} \left( \frac{1}{l-k}\sum_{i=k+1}^l Z_i \right)\leq t
\right\} \leq C t^\alpha$$
for all $t\in (0, t_0)$. In this case, $N_0$ would depend on $t$, so instead of "$N\geq N_0$" we have to use "$\limsup_N$", and I am trying to avoid this.
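
To see empirically how restrictive a $t$-dependent $N_0$ really is, one can reuse the `inf_sup_prob` sketch above, fixing a small $t$ and varying $N$; the parameter values are again only illustrative.

```python
# Reusing inf_sup_prob from the sketch above: fix a small t and vary N,
# to see at what N the Monte Carlo estimate appears to stabilize.
for N in (200, 500, 1000, 2000):
    print(N, inf_sup_prob(N=N, t=0.05, trials=5_000))
```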

Best Answer

For people who are interested, this question has been answered here on MathOverflow by fedja.