Show convergence in probability using Markov's inequality

convergence-divergence, probability, probability-theory

Let $(X_j)_{j\in \mathbb Z}$ be a discrete-time random process such that
$$X_{j}= \theta X_{j-1}+ \epsilon_j, \quad (\epsilon_j) \overset{iid}{\sim} N(0,1), |\theta|<1 $$
How can I show that

$$P\left(\max_{1\leq j \leq n} |X_j| \geq n^{3/4} \right)\to 0, \quad (n \to \infty)$$

I'm a little unsure which of the following two approaches is valid, and I don't know how to complete either one:

$$P\left(\max_{1\leq j \leq n} |X_j| \geq n^{3/4} \right)= P\left(\left[\max_{1\leq j \leq n} |X_j|\right]^2 \geq n^{3/2} \right)\leq \frac{E\left( \left[\max_{1\leq j \leq n} |X_j|\right]^2 \right)}{n^{3/2}}$$

or

$$P\left(\max_{1\leq j \leq n} |X_j| \geq n^{3/4} \right)= P\left(\left[\max_{1\leq j \leq n} |X_j|\right]^2 \geq n^{3/2} \right)= P\left(\max_{1\leq j \leq n} \{|X_j|^2\} \geq n^{3/2} \right) \leq \frac{E\left( \max_{1\leq j \leq n} \{|X_j|^2\} \right)}{n^{3/2}}$$

In both cases I use Markov's inequality.

Which alternative is correct, and how do I conclude?

I know that in both cases it would suffice to show that the last expectation is $o(n^{3/2})$ (since the expectation depends on $n$, finiteness alone is not enough). How can I justify this?
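
For what it's worth, a quick Monte Carlo sanity check (not a proof; $\theta=0.9$ and the stationary initial condition are arbitrary choices on my part) suggests the probability indeed vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.9  # arbitrary choice with |theta| < 1

def exceed_prob(n, reps=2000):
    """Monte Carlo estimate of P(max_{1<=j<=n} |X_j| >= n^(3/4))."""
    # start each path from the stationary law N(0, 1/(1 - theta^2))
    x = rng.normal(scale=1.0 / np.sqrt(1.0 - theta**2), size=reps)
    running_max = np.zeros(reps)
    for _ in range(n):
        x = theta * x + rng.normal(size=reps)  # AR(1) update
        running_max = np.maximum(running_max, np.abs(x))
    return np.mean(running_max >= n**0.75)

for n in (10, 100, 1000):
    print(n, exceed_prob(n))
```

Vectorizing over the repetitions keeps the Python loop over time steps only, so even $n=1000$ runs in a moment.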

Best Answer

EDIT: My previous approach was flawed, as @Snoop pointed out. I will keep my original "solution" further below in this answer for context. Instead, I will detail the approach using Kolmogorov's maximal inequality here:

Solution:

Note that the process $X_n$ can be written as $$X_n=\theta X_{n-1}+\epsilon_n=\theta(\theta X_{n-2}+\epsilon_{n-1})+\epsilon_n=\dots=\theta^n X_0+\sum_{i=1}^n \theta^{n-i}\epsilon_i$$ Hence we define the process $$S_n=\sum_{i=1}^n\theta^{n-i}\epsilon_i.$$ As @Karthik P N mentioned, due to the process $S_n$ being a sum of independent random variables, Kolmogorov's maximal inequality yields a straightforward approach to bounding the tails of the running maximum of $S_n$. In short, it immediately implies the bound $$\mathbb P\bigg[\max_{i\leq n}|S_i|\geq n^{3/4}\bigg]\leq n^{-3/2}\cdot\sum_{i=1}^n\text{Var}\big(\theta^i\epsilon_i\big)=n^{-3/2}\cdot\sum_{i=1}^n\theta^{2i}\leq \frac{n^{-3/2}\,\theta^2}{1-\theta^2}\xrightarrow[n\to\infty]{}0$$ Finally, since $X_i=\theta^i X_0+S_i$ and $|\theta|<1$, we have $\max_{i\leq n}|X_i|\leq |X_0|+\max_{i\leq n}|S_i|$, so $$\mathbb P\bigg[\max_{i\leq n}|X_i|\geq n^{3/4}\bigg]\leq \mathbb P\bigg[|X_0|\geq \tfrac12 n^{3/4}\bigg]+\mathbb P\bigg[\max_{i\leq n}|S_i|\geq \tfrac12 n^{3/4}\bigg],$$ where the first term clearly vanishes and the second is controlled by the maximal inequality above (at the cost of a factor $4$).
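
Numerically, the right-hand side of this bound is tiny even for moderate $n$. Here is a minimal sketch (with $\theta=0.9$ as an arbitrary choice) comparing it against the $n$-free geometric cap $\theta^2/(1-\theta^2)$ scaled by $n^{-3/2}$:

```python
theta = 0.9  # arbitrary |theta| < 1

for n in (10, 100, 1000, 10000):
    # right-hand side of the Kolmogorov bound
    rhs = n ** (-1.5) * sum(theta ** (2 * i) for i in range(1, n + 1))
    # geometric-series cap: n^(-3/2) * theta^2 / (1 - theta^2)
    cap = n ** (-1.5) * theta**2 / (1 - theta**2)
    print(n, rhs, cap)
```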

Previous Attempt (which is not quite correct):

When trying to derive bounds on the maximum of a stochastic process, it is helpful to look for an underlying martingale and then apply Doob's martingale inequality.

Now the process $$\widetilde S_n:=\sum_{i=0}^n \theta^i \epsilon_i$$ is a square-integrable martingale (here was my error: while it is correct that $\widetilde S_n$ is a square-integrable martingale, it is not the same process as $S_n$ defined above. In particular, $S_n$ is not a martingale, since $\mathbb E[S_{n+1}|\mathcal F_n]=\theta S_n\neq S_n$).

Note that we have $$\mathbb P\bigg[\max_{i\leq n}|X_i|\geq n^{3/4}\bigg]\leq \mathbb P\bigg[ |X_0|\geq \frac12 n^{3/4}\bigg]+\mathbb P\bigg[\max_{k\leq n}\big|\widetilde S_k\big|\geq \frac12 n^{3/4}\bigg]$$ Clearly the first summand goes to $0$. For the second summand, we may now use Markov's inequality together with Doob's martingale inequality with $p=2$ to obtain $$\mathbb E\bigg[\max_{k\leq n}\big|\widetilde S_k\big|^2\bigg]\leq 4\cdot \mathbb E\Big[\big|\widetilde S_n\big|^2\Big]= 4\cdot\sum_{i=0}^n \theta^{2i}$$
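
As a sanity check on the Doob step (again only an illustrative simulation; $\theta=0.9$, $n=200$, and the sample size are arbitrary choices), one can estimate $\mathbb E\big[\max_{k\leq n}|\widetilde S_k|^2\big]$ and compare it with $4\sum_{i=0}^n\theta^{2i}$:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 0.9, 200, 5000

# simulate the martingale S~_k = sum_{i=0}^k theta^i * eps_i, one path per row
eps = rng.normal(size=(reps, n + 1))
weights = theta ** np.arange(n + 1)
paths = np.cumsum(weights * eps, axis=1)   # columns k = 0, ..., n hold S~_k
lhs = np.mean(np.max(paths**2, axis=1))    # estimate of E[ max_k |S~_k|^2 ]
rhs = 4 * np.sum(weights**2)               # Doob bound: 4 * sum_i theta^(2i)
print(lhs, "<=", rhs)
```

Up to Monte Carlo error, the estimated left-hand side should land below the right-hand side, in line with Doob's $L^2$ inequality.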
