[Math] Conditions for convergence of moments given uniform convergence of distribution functions

Tags: convergence-divergence, probability, probability-distributions, probability-theory, uniform-convergence

Setup: Let $S_n = n^{-1} \sum_{i=1}^n X_i$ denote a sample mean and let $S_n^*$ denote a stationary bootstrap resample of $S_n$. Let $F_n(x)$ denote the cumulative distribution function of $\sqrt{n} S_n$ and let $F_n^*(x)$ denote the cdf of $\sqrt{n} (S_n^* - S_n)$; thus $F_n^*(x)$ is conditional on $X_1, \ldots, X_n$. Assume the sequence $\{X_n\}$ is near-epoch-dependent on a strong-mixing base, that $\mathbb{E} |X_n|^{2+\delta}$ is finite for some $\delta > 0$, and that $\mathbb{E} X_n = 0$ for all $n$. Then the standard stationary bootstrap result holds, i.e.:

\begin{equation}
\sup_{x \in \mathbb{R}} |F_n(x) - F_n^*(x)| \overset{\mathbb{P}}{\rightarrow} 0, \: \mathrm{as} \: n \rightarrow \infty
\end{equation}
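For concreteness, here is a rough numerical sketch of this setup (every specific choice — the AR(1) data-generating process standing in for the NED assumption, the mean block length $1/p$, and the sample sizes — is an arbitrary illustration, and the sup-norm is only approximated on a finite grid): it draws the distribution of $\sqrt{n} S_n$ by Monte Carlo, draws the stationary-bootstrap distribution of $\sqrt{n}(S_n^* - S_n)$ from a single realisation, and reports the distance between the two empirical cdfs.

```python
import numpy as np

rng = np.random.default_rng(0)

def stationary_bootstrap(x, p, rng):
    """One stationary-bootstrap resample (Politis & Romano, 1994):
    blocks start at uniform random positions, block lengths are
    geometric with mean 1/p, and indexing wraps around circularly."""
    n = len(x)
    idx = np.empty(n, dtype=int)
    i = 0
    while i < n:
        start = rng.integers(n)          # random block start
        length = rng.geometric(p)        # geometric block length
        take = min(length, n - i)
        idx[i:i + take] = (start + np.arange(take)) % n
        i += take
    return x[idx]

def ar1(n, rho=0.5, burn=100):
    """AR(1) data as a stand-in for the NED / strong-mixing assumption."""
    e = rng.standard_normal(n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = rho * x[t - 1] + e[t]
    return x[burn:]

n, B, p = 500, 2000, 0.05                # p = 0.05 gives mean block length 20

# Monte Carlo draws of sqrt(n) * S_n (approximating F_n) ...
mc = np.array([np.sqrt(n) * ar1(n).mean() for _ in range(B)])

# ... and bootstrap draws of sqrt(n) * (S_n^* - S_n) from one realisation
# (approximating F_n^*, which is conditional on X_1, ..., X_n).
x = ar1(n)
boot = np.array([np.sqrt(n) * (stationary_bootstrap(x, p, rng).mean() - x.mean())
                 for _ in range(B)])

grid = np.linspace(-6.0, 6.0, 401)
Fn = np.searchsorted(np.sort(mc), grid, side="right") / B
Fn_star = np.searchsorted(np.sort(boot), grid, side="right") / B
print("approx sup_x |F_n(x) - F_n^*(x)| =", np.abs(Fn - Fn_star).max())
```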

Question: What (if any) additional conditions are needed for:

\begin{equation}
\left| \int_a^b x^{1+\alpha} dF_n^*(x) - \int_a^b x^{1+\alpha} dF_n(x) \right| \overset{\mathbb{P}}{\rightarrow} 0, \: \mathrm{as} \: n \rightarrow \infty ,
\end{equation}

for some $\alpha > 0$, and some arbitrary choice of $a < b$ (possibly $a = -\infty$ and $b = \infty$)?

In words, given uniform convergence in probability of two cdfs (one of them conditional), what additional conditions are necessary to be certain that arbitrary moments also converge in probability?

Additional Information 1: I asked a very closely related question on CrossValidated here, and was informed (I think) that the moments do converge. However, the answer did not offer any proof, other than an oblique reference to the uniform convergence theorem in the comments, which actually raised more questions for me than it answered.

Additional Information 2: I'm fairly sure the result follows if the convergence in the setup is strengthened to almost-sure convergence; see Xiong and Li (2008), "Some Results on the Convergence of Conditional Distributions". However, I'm specifically interested in the case of convergence in probability.

Best Answer

This addresses the case where $a$ and $b$ are fixed and finite.

Throughout, let $V(\cdot)$ denote the total variation of a function. The domain should be clear from context.

Lemma: Suppose $f$, $F$, and $G$ are functions of bounded variation on $[a,b]$ and suppose $|F(x)-G(x)|\le\epsilon$ for all $x\in[a,b]$. Let $M= \sup_{x\in[a,b]} |f(x)|$. Then $$ \left|\int_a^b f(x) dF(x) - \int_a^b f(x) dG(x)\right|\le \left(2M + V(f)\right)\epsilon. $$

Proof: Integrate by parts: \begin{eqnarray} \int_a^b f(x) dF(x) - \int_a^b f(x) dG(x) &=& f(b)\left(F(b)-G(b)\right) - f(a)\left(F(a)-G(a)\right) \\ & & \ \ - \int_a^b \left(F(x) - G(x)\right) df(x). \end{eqnarray} Bounding each of the three terms on the right separately (the first two by $M\epsilon$ each, the last by $V(f)\epsilon$) gives the result.
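As a quick sanity check on the lemma (not part of the proof), the following sketch compares the two Stieltjes integrals with the bound $(2M + V(f))\epsilon$ for one arbitrary illustrative choice: $f(x) = x^2$ on $[a,b] = [-2,2]$, $F$ the standard normal cdf, and $G$ a slightly shifted copy of it.

```python
import numpy as np
from scipy.stats import norm

# Arbitrary illustrative choices: f(x) = x^2 on [-2, 2], F = standard
# normal cdf, G = normal cdf shifted by 0.02.
a, b = -2.0, 2.0
xs = np.linspace(a, b, 20001)
f = xs ** 2

F = norm.cdf(xs)
G = norm.cdf(xs, loc=0.02)
eps = np.abs(F - G).max()                 # sup |F - G| on [a, b]

# Riemann-Stieltjes integrals  int_a^b f dF  and  int_a^b f dG,
# approximated with midpoint values of f and increments of the cdfs.
int_F = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(F))
int_G = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(G))

M = np.abs(f).max()                       # sup |f| on [a, b]
Vf = np.sum(np.abs(np.diff(f)))           # total variation of f on [a, b]
print("|int f dF - int f dG| :", abs(int_F - int_G))
print("(2M + V(f)) * eps     :", (2 * M + Vf) * eps)
```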

Corollary: Let $f:\mathbb{R}\to\mathbb{R}$ be a function of bounded variation. Suppose $F_n$ is a sequence of cdfs and $F_n^*$ is a sequence of random cdfs (on some underlying probability space such that the expressions below make sense). If $$ \sup_{x\in\mathbb{R}} |F_n(x)-F_n^*(x)| \overset{\mathbb{P}}{\rightarrow} 0, $$ then $$ \int f(x) dF_n(x) - \int f(x) dF_n^*(x) \overset{\mathbb{P}}{\rightarrow} 0. $$

Proof: Since $f$ has bounded variation, it is bounded. Let $M$ be such that $|f(x)|\le M$ for all $x$. Fix $\epsilon>0$. Let $N$ be such that $n>N$ implies $$ \mathbb{P}\left(\sup_{x\in\mathbb{R}} |F_n(x) - F_n^*(x)|<\frac{\epsilon}{2M+V(f)}\right) > 1-\epsilon. $$ From the lemma, for all $n>N$, $$ \mathbb{P}\left(\left|\int f(x)\ dF_n(x) - \int f(x)\ dF_n^*(x)\right|<\epsilon\right) > 1-\epsilon. $$ The result follows.

Comment: The integrals here are over $\mathbb{R}$, but one can take $$ f(x) = \begin{cases} x^{1+\alpha} & \text{if } x\in[a,b]\\ 0 & \text{if } x\notin[a,b] \end{cases} $$ for the original question.
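A hypothetical numerical illustration of this truncation (the values of $a$, $b$, and $\alpha$ below are arbitrary, and $x^{1+\alpha}$ is interpreted as $\mathrm{sign}(x)|x|^{1+\alpha}$ for negative $x$, which the question leaves unspecified): integrating the truncated $f$ against two empirical cdfs that are uniformly close gives integrals that are correspondingly close, as the corollary predicts. The two nearby normal samples here simply stand in for draws from $F_n$ and $F_n^*$.

```python
import numpy as np

rng = np.random.default_rng(1)

a, b, alpha = -2.0, 2.0, 0.5

def f(x):
    """x^{1+alpha} truncated to [a, b], with sign(x)|x|^{1+alpha}
    used for negative x (an assumed convention for illustration)."""
    x = np.asarray(x, dtype=float)
    val = np.sign(x) * np.abs(x) ** (1 + alpha)
    return np.where((x >= a) & (x <= b), val, 0.0)

# Two large samples from nearby distributions stand in for F_n and F_n^*.
u = rng.standard_normal(200_000) + 0.30
v = rng.standard_normal(200_000) * 1.02 + 0.33

# For an empirical cdf, the Stieltjes integral of f is just a sample mean.
print("int f dF_n    ~", f(u).mean())
print("int f dF_n^*  ~", f(v).mean())
print("difference    ~", abs(f(u).mean() - f(v).mean()))
```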