Probability – Convergence of Row Sums in Triangular Null Array with Zero Mean

Tags: limits-and-convergence, pr.probability, probability-distributions

Let $(X_{jn})_{1\leq j \leq n}$, $n\in \mathbb N$, be a triangular array of random vectors in $\mathbb R^d$ (the $X_{jn}$ are understood to be independent in $j$ for fixed $n$). We say that the triangular array is null if $X_{jn} \overset{p}{\to} 0$ as $n \to \infty$, uniformly in $j$, i.e.,
\begin{equation}
\lim_{n \to \infty} \max_{1\leq j\leq n} P( \, |X_{jn}|> \epsilon \, )=0, \quad (\forall \, \epsilon>0).
\end{equation}
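For example (an illustration, not from the original post): if $\xi_1,\xi_2,\dots$ are i.i.d. with mean $0$ and variance $\sigma^2<\infty$, then $X_{jn}:=\xi_j/\sqrt n$ is a null array, since by Chebyshev's inequality
$$\max_{1\leq j\leq n} P(|X_{jn}|>\epsilon)=P(|\xi_1|>\epsilon\sqrt n)\leq \frac{\sigma^2}{n\epsilon^2}\to 0, \quad n\to\infty.$$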

For any random vector $X$ with distribution $\mu$, we introduce an associated compound Poisson random vector $X^{\tilde{}}$ (note the tilde) with characteristic measure $\mu$, i.e.,
\begin{equation}
\log \varphi_{X^{\tilde{}}}(u) = \int_{\mathbb R^d} (e^{iu'x}-1)\,\mu (dx), \quad u \in \mathbb R^d.
\end{equation}
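Concretely (a standard fact, added here for orientation): since $\mu$ is a probability measure, $X^{\tilde{}}$ can be realized as $\sum_{k=1}^{N}Z_k$, where $N\sim\mathrm{Poisson}(1)$ is independent of the i.i.d. sequence $Z_1,Z_2,\dots\sim\mu$; indeed, writing $\hat\mu$ for the characteristic function of $\mu$,
$$\varphi_{X^{\tilde{}}}(u)=E\big[\hat\mu(u)^{N}\big]=e^{\hat\mu(u)-1}=\exp\int_{\mathbb R^d}(e^{iu'x}-1)\,\mu(dx).$$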

For a triangular array $(X_{jn})_{1\leq j \leq n}$, the corresponding compound Poisson vectors $(X_{jn}^{\tilde{}})_{1\leq j \leq n}$ are again assumed to have row-wise independent entries.

By $X_n \overset{d}{\sim} Y_n$ we mean that if either side converges in distribution along a subsequence, then so does the other along the same subsequence, with the same limit.

We can now state the proposition on compound Poisson approximation: let $(X_{jn})_{1\leq j \leq n}$, $n\in \mathbb N$, be a null array of random vectors in $\mathbb R^d$. Fix $h > 0$, and define:
\begin{equation}\label{k1}\tag{K1}
b_{jn}= E(X_{jn}\,;\,|X_{jn}|\leq h).
\end{equation}

Then
\begin{equation}\label{k2}\tag{K2}
\sum_j X_{jn} \overset{d}{\sim} \sum_j \left\{ (X_{jn} - b_{jn})^{\tilde{}} +b_{jn} \right\}
\end{equation}

(This is Proposition 7.11 in Kallenberg's Foundations of Modern Probability, third edition.)

Update

Allow me to rephrase my question. We say that an infinitely divisible random vector has the Lévy–Khintchine representation if its characteristic function is given by
$$\varphi_X(u)= \exp\left\{i u'b_c - \frac{1}{2}u'au + \int_{\mathbb R^d} \left[e^{iu'x}-1 - i u'x\, c(x)\right] d\nu(x) \right\},$$
where $c(x)$ is a truncation function chosen so that the integrand is $\nu$-integrable. We denote this by $X \sim (b_c, a, \nu)_c$. In the case above, we have $X^{\tilde{}} \sim (0,0,\mu)_0$.

Now, we can change the truncation function $c(x)$ to another one, $h(x)$: if $X \sim (b_c, a, \nu)_c$, then
$$X \sim (b_h, a, \nu)_h, \quad b_h = b_c + \int_{\mathbb R^d}x \left[ h(x)- c(x)\right]d\nu(x).$$
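As a quick sanity check (a one-line verification not in the original post), the two parametrizations yield the same characteristic exponent:
$$i u'b_h + \int_{\mathbb R^d}\left[e^{iu'x}-1-iu'x\, h(x)\right]d\nu(x) = i u'b_c + \int_{\mathbb R^d}\left[e^{iu'x}-1-iu'x\, c(x)\right]d\nu(x),$$
since $i u'(b_h-b_c)=i u'\int x\,[h(x)-c(x)]\,d\nu(x)$ exactly compensates the change in the integrand.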

Question:

Let $(X_{jn})_{1\leq j \leq n}$ be a null triangular array with $X_{jn} \sim \mu_{jn}$. Suppose $E[X_{jn}]=0$ and that we also have:
\begin{equation}\label{I}\tag{I}
\sum_{j=1}^n \mathbf{v}(X_{jn})\leq C< \infty,\quad \forall \, n
\end{equation}

where $X_{jn}=(X_{jn_{1}},\dots, X_{jn_{d}})$ and:
$$\mathbf{v}(X_{jn}):=\hbox{trace} \left(E\left[X_{jn}X_{jn}^{\,\,'}\right]\right)= \sum_{k=1}^d \hbox{var}(X_{jn_{k}})$$
Note that $X_{jn_{k}}$ is one-dimensional, $E[X_{jn_{k}}]=0$, and $\hbox{var}(X_{jn_{k}})= E[(X_{jn_{k}})^2]$.

Define $S_n' \sim (0,0,\mu_n)_h$ with $h(x)\equiv 1$, where $$\mu_n := \sum_{j=1}^n \mu_{jn}.$$
How can one show, using (\ref{k1}) and (\ref{k2}), that
$$S_n := \sum_j X_{jn} \overset{d}{\sim} S_n' $$

Best Answer

$\newcommand{\R}{\mathbb R}\newcommand{\ep}{\varepsilon}$Let us rephrase the question a bit. For each natural $n$, let $X_{1,n},\dots,X_{j_n,n}$ be independent zero-mean random vectors in $\R^d$ such that (i) for each real $\ep>0$ \begin{equation*} \max_{j\in J_n} P(\|X_{j,n}\|>\ep)\to0 \tag{0}\label{0} \end{equation*} (as $n\to\infty$) and (ii) for some real $C>0$ and all $n$ \begin{equation*} \sum_{j\in J_n} E\|X_{j,n}\|^2\le C, \tag{10}\label{10} \end{equation*} where $j_n$ is a positive integer, $J_n:=\{1,\dots,j_n\}$, and $\|\cdot\|$ denotes the Euclidean norm. Let \begin{equation*} S_n:=\sum_{j\in J_n} X_{j,n}. \end{equation*}

For any random vector $Z$ in $\R^d$, let $f_Z$ denote the characteristic function (ch. f.) of $Z$, so that $f_Z(t)=Ee^{it\cdot Z}$ for $t\in\R^d$, where $\cdot$ denotes the dot product.

For each natural $n$, let $Y_{1,n},\dots,Y_{j_n,n}$ be independent random vectors in $\R^d$ such that for all $j\in J_n$ and all $t\in\R^d$ \begin{equation*} f_{Y_{j,n}}(t)=\exp(f_{X_{j,n}}(t)-1). \end{equation*} Let \begin{equation*} T_n:=\sum_{j\in J_n} Y_{j,n}. \end{equation*}
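A remark connecting this to the question's notation (my addition, not in the original answer): $\log f_{T_n}(t)=\sum_{j\in J_n}(f_{X_{j,n}}(t)-1)=\int_{\R^d}(e^{it\cdot x}-1)\,\mu_n(dx)$, and since each $X_{j,n}$ has mean zero and a finite second moment, $\int x\,\mu_n(dx)=\sum_{j\in J_n}EX_{j,n}=0$. Hence $\log f_{T_n}(t)=\int_{\R^d}(e^{it\cdot x}-1-it\cdot x)\,\mu_n(dx)$, i.e. $T_n\overset{d}{=}S_n'\sim(0,0,\mu_n)_h$ with $h\equiv1$, so \eqref{20} below is indeed the claim asked about.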

The problem is then to show that \begin{equation*} \text{$S_n$ converges in distribution iff $T_n$ converges in distribution.} \tag{20}\label{20} \end{equation*}

Here is a proof. Take any $t\in\R^d$. Note that $|e^{iu}-1|\le\min(2,|u|)$ for all real $u$. So, for each real $\ep>0$, \begin{equation*} |f_{X_{j,n}}(t)-1|\le E|e^{it\cdot X_{j,n}}-1|\,1(\|X_{j,n}\|\le\ep) +E|e^{it\cdot X_{j,n}}-1|\,1(\|X_{j,n}\|>\ep) \le\|t\|\ep+2P(\|X_{j,n}\|>\ep) \end{equation*} and hence, by \eqref{0}, \begin{equation*} \limsup_n\max_{j\in J_n}|f_{X_{j,n}}(t)-1|\le\|t\|\ep. \end{equation*} So, \begin{equation*} \max_{j\in J_n}|f_{X_{j,n}}(t)-1|\to0\text{ uniformly in $t$ in any bounded set}. \tag{30}\label{30} \end{equation*}

Note also that $|e^{iu}-1-iu|\le u^2/2\le u^2$ for all real $u$. So, \begin{equation*} |f_{X_{j,n}}(t)-1|=|E(e^{it\cdot X_{j,n}}-1-it\cdot X_{j,n})|\le E(t\cdot X_{j,n})^2 \le\|t\|^2 E\|X_{j,n}\|^2. \end{equation*} So, by \eqref{10}, \begin{equation*} \sum_{j\in J_n}|f_{X_{j,n}}(t)-1|\le C\|t\|^2. \tag{40}\label{40} \end{equation*}

It follows from \eqref{30} that for all large enough $n$ (depending on $t$) and all $j\in J_n$ the value of $\ln f_{X_{j,n}}(t)$ is defined and \begin{equation*} |\ln f_{X_{j,n}}(t)-(f_{X_{j,n}}(t)-1)|\le|f_{X_{j,n}}(t)-1|^2. \end{equation*} So, by \eqref{40} and \eqref{30}, \begin{equation*} \begin{split} \Big|\sum_{j\in J_n}\ln f_{X_{j,n}}(t)-\sum_{j\in J_n}(f_{X_{j,n}}(t)-1)\Big| &\le\sum_{j\in J_n}|f_{X_{j,n}}(t)-1|^2 \\ &\le\max_{j\in J_n}|f_{X_{j,n}}(t)-1|\,\sum_{j\in J_n}|f_{X_{j,n}}(t)-1| \\ &\le C\|t\|^2\max_{j\in J_n}|f_{X_{j,n}}(t)-1|\to0. \end{split} \tag{50}\label{50} \end{equation*}
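For completeness (a routine power series estimate, not spelled out in the original answer): for the principal branch and $|z-1|\le\tfrac12$,
$$|\ln z-(z-1)|=\Big|\sum_{k\ge2}\frac{(-1)^{k-1}(z-1)^k}{k}\Big|\le\frac{|z-1|^2}{2}\sum_{k\ge0}|z-1|^k\le|z-1|^2,$$
which is the inequality used above once $n$ is so large that $\max_{j\in J_n}|f_{X_{j,n}}(t)-1|\le\tfrac12$, as guaranteed by \eqref{30}.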

Thus, in view of \eqref{50} and \eqref{30},

$S_n$ converges in distribution

iff $\prod_{j\in J_n}f_{X_{j,n}}$ converges pointwise to a function continuous at $0$

iff $\sum_{j\in J_n}\ln f_{X_{j,n}}$ converges pointwise to a function continuous at $0$

iff $\sum_{j\in J_n}(f_{X_{j,n}}-1)$ converges pointwise to a function continuous at $0$

iff $T_n$ converges in distribution.

So, we have \eqref{20}. $\quad\Box$
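Finally, as an optional empirical check (my own sketch, not part of the quoted answer; the concrete array $X_{jn}=\xi_j/\sqrt n$ with Rademacher $\xi_j$ is an assumption chosen purely for illustration), here is a short Python simulation comparing $S_n$ with its compound Poisson counterpart $T_n$:

```python
# A minimal numerical sketch of (20): for the null array X_{jn} = xi_j/sqrt(n)
# with i.i.d. Rademacher xi_j, the row sum S_n and the accompanying compound
# Poisson sum T_n should be close in distribution (both approximately N(0,1)).
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 20_000

# S_n = sum_j X_{jn}.
xi = rng.choice([-1.0, 1.0], size=(reps, n))
S = xi.sum(axis=1) / np.sqrt(n)

# T_n = sum_j Y_{jn}, where each Y_{jn} is compound Poisson with characteristic
# measure mu_{jn} = law of X_{jn} (rate 1, jumps ~ mu_{jn}).  Since the mu_{jn}
# coincide here, T_n is compound Poisson with rate n and jump law mu_{1n}:
# a Poisson(n) number of independent copies of X_{1n}.
N = rng.poisson(n, size=reps)
T = np.array([rng.choice([-1.0, 1.0], size=k).sum() for k in N]) / np.sqrt(n)

print(f"S_n: mean {S.mean():+.3f}, var {S.var():.3f}")  # expect ~0, ~1
print(f"T_n: mean {T.mean():+.3f}, var {T.var():.3f}")  # expect ~0, ~1
```

Both sample moments should agree to within Monte Carlo error, consistent with $S_n$ and $T_n$ sharing the same $N(0,1)$ limit.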