Solved – What’s the convergence rate in the context of convergence in probability

asymptotics, convergence, probability

A sequence $z_n$ with $\lim_{n\to\infty} z_n = z$ is said to have $Q$-linear convergence if there exists a constant $r\in (0,1)$ such that, for all sufficiently large $n$,

$\displaystyle |z_{n+1} - z| \leq r \, |z_n - z|,$

where $r$ is called the rate of convergence.
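
For concreteness, here is a quick numerical sanity check (a toy sequence of my own choosing, nothing canonical): the sequence $z_n = z + c\,r^n$ satisfies the bound with equality, so its errors shrink by exactly the factor $r$ at every step.

```python
import numpy as np

# Toy sequence z_n = z + c * r**n: it satisfies |z_{n+1} - z| = r * |z_n - z|
# exactly, so it converges Q-linearly to z with rate r.
z, c, r = 2.0, 1.0, 0.5

errors = np.array([abs((z + c * r**n) - z) for n in range(10)])

# Each successive error is r times the previous one.
assert np.allclose(errors[1:] / errors[:-1], r)
```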

My question is whether this notion of rate of convergence can be applied to a sequence of random variables (RVs) $X_n$ converging in probability to an RV $X$:

$\lim_{n\to\infty} P(|X_n - X| \geq \varepsilon) = 0, \quad \forall \varepsilon > 0.$

Assume that we know that

$\displaystyle P(|X_{n+1} - X| \geq \varepsilon) \leq r_\varepsilon \, P(|X_n - X| \geq \varepsilon),\quad \forall \varepsilon > 0.$

Here $r_\varepsilon \in (0,1)$ is a non-increasing function of $\varepsilon$, with $\lim_{\varepsilon \to 0^+} r_\varepsilon = 1$. (It must be non-increasing: if it were non-decreasing with limit $1$ at $0^+$, it could not stay below $1$.)
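
To check that such a family $r_\varepsilon$ can actually occur, here is a toy example of my own (not from any reference): take $X_n = X + Z_n$ with $Z_n \sim N(0, 1/n)$ independent of $X$, so that $P(|X_n - X| \geq \varepsilon) = 2\,(1 - \Phi(\varepsilon\sqrt{n}))$. Numerically, the ratio of successive tail probabilities appears to stay below $r_\varepsilon = e^{-\varepsilon^2/2}$, which is non-increasing in $\varepsilon$ and tends to $1$ as $\varepsilon \to 0^+$.

```python
import numpy as np
from scipy.stats import norm

# Toy example: X_n = X + Z_n with Z_n ~ N(0, 1/n), so that
# P(|X_n - X| >= eps) = 2 * (1 - Phi(eps * sqrt(n))).
for eps in (0.1, 0.5, 1.0, 2.0):
    n = np.arange(1, 200)
    p = 2 * norm.sf(eps * np.sqrt(n))   # tail probabilities P(|X_n - X| >= eps)
    r_eps = np.exp(-eps**2 / 2)         # candidate epsilon-dependent rate
    print(f"eps={eps}: max ratio {np.max(p[1:] / p[:-1]):.4f} <= r_eps {r_eps:.4f}")
```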

Is it OK to say that $r_\varepsilon$ is an ($\varepsilon$-dependent) rate of convergence in probability for $X_n$?

More generally, I would like to know whether there is a widely accepted definition of the rate of convergence for convergence in probability, and if so, what that definition is.

Best Answer

I would argue that the most widely accepted definition of a convergence rate uses the "big-Oh" and "small-oh" notation.

That is, convergence in probability is written as $z_n-z=o_p(1)$, while a rate of convergence can be indicated via a statement like $z_n-z=O_p(n^{-\alpha})$, which says that $z_n-z$ remains stochastically bounded even when divided by $n^{-\alpha}$, i.e., multiplied by $n^{\alpha}$. A leading case is $\alpha=1/2$, the familiar $\sqrt{n}$ rate of many standard estimators. Hence, $z_n-z$ must vanish (converge to zero) at rate $n^{\alpha}$.
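
As a small illustration (my own sketch, with the sample mean standing in for a generic $\sqrt{n}$-consistent estimator): if $z_n$ is the mean of $n$ i.i.d. draws with mean $z$, then $\sqrt{n}(z_n - z)$ stays stochastically bounded as $n$ grows, which is exactly what $z_n - z = O_p(n^{-1/2})$ asserts.

```python
import numpy as np

# Sample mean of n iid N(0,1) draws: z_n - z = Xbar_n - 0 is O_p(n^{-1/2}),
# so sqrt(n) * (Xbar_n - 0) should be stochastically bounded in n.
rng = np.random.default_rng(0)
for n in (100, 1_000, 10_000):
    xbar = rng.standard_normal((500, n)).mean(axis=1)  # 500 replications of Xbar_n
    scaled = np.sqrt(n) * np.abs(xbar)
    # The 95% quantile of sqrt(n)*|Xbar_n| hovers near 1.96 for every n.
    print(n, np.quantile(scaled, 0.95).round(2))
```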

Of course, you are right that not all convergence rates need to relate directly to $n$. In nonparametric estimation, for example, it is of interest to see what happens to the bias of a density estimator as the underlying bandwidth $h\to0$.
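
To sketch that point (my own illustration of the standard Gaussian-kernel calculation): for $N(0,1)$ data and a Gaussian kernel, the kernel density estimate at $0$ has expectation equal to the $N(0, 1+h^2)$ density at $0$, so its bias is available in closed form here and shrinks like $h^2$ as $h \to 0$, a rate tied to the bandwidth rather than to $n$.

```python
import numpy as np

# Gaussian-kernel KDE of N(0,1) data, evaluated at x = 0:
# E[f_hat_h(0)] is the N(0, 1 + h^2) density at 0, so the bias is exact here.
f0 = 1 / np.sqrt(2 * np.pi)                       # true N(0,1) density at 0
for h in (0.4, 0.2, 0.1, 0.05):
    Ef_hat = 1 / np.sqrt(2 * np.pi * (1 + h**2))  # expected KDE value at 0
    print(f"h={h}: bias = {Ef_hat - f0:+.5f}")    # roughly quarters as h halves
```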
