Below is what I wanted to prove
Let $(X_{k})$ be integrable i.i.d. random variables such that $E[X_{k}]=0$. Show that if $c_{n}$ is a bounded sequence of non-random constants, then $$n^{-1}\sum_{k=1}^{n}c_{k}X_{k}\longrightarrow 0\ \text{a.s.}$$
To prove this, we need two lemmas and Kolmogorov's One Series Theorem, which are stated below:
Lemma 1: (Kronecker's Lemma) Let $\{x_{n}\}$ and $\{a_{n}\}$ be two sequences of real numbers such that $a_{n}>0$ and $a_{n}\nearrow\infty$. If $\sum_{n=1}^{\infty}x_{n}/a_{n}$ converges, then $$a_{n}^{-1}\sum_{m=1}^{n}x_{m}\longrightarrow 0.$$
Lemma 2: $\sum_{k=1}^{\infty}k^{-2}Var(X_{k}\mathbb{1}_{(|X_{k}|\leq k)})\leq 4E|X_{1}|.$
Kolmogorov's One Series Theorem: Suppose $X_{1},X_{2},\dots$ are independent and have $EX_{n}=0$. If $$\sum_{n=1}^{\infty}Var(X_{n})<\infty,$$ then $\sum_{n=1}^{\infty}X_{n}$ converges a.s.
I will first use these three results to prove the statement, and then prove the two lemmas. The proof of the One Series Theorem is standard and can be found in most textbooks.
Proof of Statement:
Set $S_{n}:=c_{1}X_{1}+\cdots+c_{n}X_{n}$, $Y_{k}:=(c_{k}X_{k})\mathbb{1}_{(|c_{k}X_{k}|\leq k)}$, and $T_{n}:=Y_{1}+\cdots+Y_{n}.$
It then suffices to show that $T_{n}/n\longrightarrow 0$ a.s. Indeed, since $(c_{k})$ is bounded, there exists $N$ such that $|c_{k}|\leq N$ for all $k$, so $|c_{k}X_{k}|>k$ forces $|X_{k}|>k/N$. Since the $X_{k}$ are identically distributed and $t\mapsto P(|X_{1}|>t)$ is non-increasing, we have $$\sum_{k=1}^{\infty}P(|c_{k}X_{k}|>k)\leq\sum_{k=1}^{\infty}P(|X_{1}|>k/N)\leq\int_{0}^{\infty}P(|X_{1}|>t/N)\,dt=N\,E|X_{1}|<\infty,$$ so by the Borel–Cantelli lemma $$P(c_{k}X_{k}\neq Y_{k}\ \text{i.o.})=0.$$ Hence, almost surely, $c_{k}X_{k}=Y_{k}$ for all but finitely many $k$, so there is a random variable $R$ with $$|S_{n}-T_{n}|\leq R<\infty\ \text{a.s. for all}\ n,$$ and therefore $S_{n}/n-T_{n}/n\longrightarrow 0$ a.s.
Secondly, glancing at the proof of Lemma 2, we see that the same argument applied to $c_{k}X_{k}$ (which satisfies $|c_{k}X_{k}|\leq N|X_{k}|$) gives $$\sum_{k=1}^{\infty}Var(Y_{k})/k^{2}\leq 4N\,E|X_{1}|<\infty,$$ which is finite since $(c_{k})$ is bounded and $X_{1}$ is integrable.
Now, set $Z_{k}:=Y_{k}-EY_{k}$, so that $E(Z_{k})=0$ and $Var(Z_{k})=Var(Y_{k})$. By the previous bound, $$\sum_{k=1}^{\infty}Var(Z_{k}/k)=\sum_{k=1}^{\infty}Var(Y_{k})/k^{2}\leq 4N\,E|X_{1}|<\infty,$$ so by Kolmogorov's One Series Theorem $\sum_{k=1}^{\infty}Z_{k}/k$ converges a.s. Applying Lemma 1 with $a_{n}=n$ and $x_{n}=Z_{n}$, we get $$n^{-1}\sum_{k=1}^{n}(Y_{k}-EY_{k})\longrightarrow 0\ \text{a.s.},$$ that is, $$\dfrac{T_{n}}{n}-n^{-1}\sum_{k=1}^{n}EY_{k}\longrightarrow 0\ \text{a.s.}$$
Now, since $E[c_{k}X_{k}]=0$, we have $EY_{k}=-E\big[c_{k}X_{k}\mathbb{1}_{(|c_{k}X_{k}|>k)}\big]$, so $$|EY_{k}|\leq N\,E\big[|X_{1}|\mathbb{1}_{(|X_{1}|>k/N)}\big]\longrightarrow 0\ \text{as}\ k\longrightarrow\infty$$ by the dominated convergence theorem (the integrands are dominated by the integrable $|X_{1}|$ and tend to $0$ a.s.). Since $EY_{k}\longrightarrow 0$, its Cesàro averages converge to the same limit: $$n^{-1}\sum_{k=1}^{n}EY_{k}\longrightarrow 0,$$ and hence $$T_{n}/n\longrightarrow 0\ \text{a.s.}$$
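As a quick sanity check (not part of the proof), here is a small Monte Carlo simulation in Python; the distribution of $X_{k}$ (centered exponential) and the bounded sequence $c_{k}=\cos k$ are arbitrary choices of mine.

```python
import numpy as np

# Sanity check of the statement (not part of the proof): the distribution of
# X_k (centered exponential) and the weights c_k = cos(k) are arbitrary picks.
rng = np.random.default_rng(0)

n = 10**6
X = rng.exponential(scale=1.0, size=n) - 1.0   # i.i.d., E[X_k] = 0, E|X_k| < oo
c = np.cos(np.arange(1, n + 1))                # bounded: |c_k| <= 1

running_avg = np.cumsum(c * X) / np.arange(1, n + 1)
for m in [10**3, 10**4, 10**5, 10**6]:
    print(f"n = {m:>7}:  n^-1 sum c_k X_k = {running_avg[m - 1]: .5f}")
# The printed values shrink toward 0, as the statement predicts.
```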
Proof of Lemma 1:
By the setting of this problem, I take it that $a_{n}>0$ for $n\geq 1$, so let us set $a_{0}:=0$. For $m\geq 1$, define $b_{m}:=\sum_{k=1}^{m}x_{k}/a_{k}$, and define $b_{0}:=0$. Then $x_{m}=a_{m}(b_{m}-b_{m-1})$. Therefore, we have (don't forget $a_{0}=b_{0}=0$):
\begin{align*}
a_{n}^{-1}\sum_{m=1}^{n}x_{m}&=a_{n}^{-1}\sum_{m=1}^{n}a_{m}(b_{m}-b_{m-1})\\
&=a_{n}^{-1}\Big(\sum_{m=1}^{n}a_{m}b_{m}-\sum_{m=1}^{n}a_{m}b_{m-1}\Big)\\
&=a_{n}^{-1}\Big(a_{n}b_{n}+\sum_{m=2}^{n}a_{m-1}b_{m-1}-\sum_{m=1}^{n}a_{m}b_{m-1}\Big)\\
&=b_{n}-\sum_{m=1}^{n}\dfrac{(a_{m}-a_{m-1})}{a_{n}}b_{m-1}.
\end{align*}
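As an aside, this summation-by-parts identity is easy to verify numerically; the following snippet (with arbitrary test sequences) is only a sanity check, not part of the proof.

```python
import numpy as np

# Numerical check of the summation-by-parts identity above (not part of the
# proof; a_n and x_n are arbitrary test sequences).
rng = np.random.default_rng(2)
n = 50
a = np.cumsum(rng.uniform(0.1, 1.0, size=n))   # positive and increasing
x = rng.normal(size=n)

b = np.cumsum(x / a)                           # b_m = sum_{k<=m} x_k / a_k
a_prev = np.concatenate([[0.0], a[:-1]])       # a_0 := 0
b_prev = np.concatenate([[0.0], b[:-1]])       # b_0 := 0

lhs = x.sum() / a[-1]
rhs = b[-1] - np.sum((a - a_prev) / a[-1] * b_prev)
print(abs(lhs - rhs))                          # ~ 1e-16
```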
By hypothesis, $b_{n}\longrightarrow b_{\infty}$ as $n\longrightarrow\infty$ for some finite $b_{\infty}$. It then remains for us to show that $$\sum_{m=1}^{n}\dfrac{(a_{m}-a_{m-1})}{a_{n}}b_{m-1}\longrightarrow b_{\infty}.$$
Let $\epsilon>0$, set $B:=\sup_{n}|b_{n}|$ (finite, since $(b_{n})$ converges; we may assume $B>0$, as otherwise the claim is trivial), and choose $M$ such that $|b_{m}-b_{\infty}|<\epsilon/2$ for all $m\geq M$. Also, pick $N$ such that $a_{M}/a_{n}<\epsilon/4B$ for all $n\geq N$, which is possible since $a_{n}\nearrow\infty$.
Now, for $n\geq N$, recalling $a_{m}-a_{m-1}\geq 0$, noting that the sum telescopes to give $\sum_{m=1}^{n}\dfrac{(a_{m}-a_{m-1})}{a_{n}}=1$, and splitting the sum below at $m=M$, we have:
\begin{align*}
\Big|\sum_{m=1}^{n}\dfrac{(a_{m}-a_{m-1})}{a_{n}}b_{m-1}-b_{\infty}\Big|&\leq\sum_{m=1}^{n}\dfrac{(a_{m}-a_{m-1})}{a_{n}}|b_{m-1}-b_{\infty}|\\
&\leq \dfrac{a_{M}}{a_{n}}\cdot 2B+\dfrac{a_{n}-a_{M}}{a_{n}}\cdot\dfrac{\epsilon}{2}\\
&<\epsilon.
\end{align*}
Since $\epsilon>0$ was arbitrary, the result follows.
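Again just as an illustration (not part of the proof), here is a numerical example of Kronecker's lemma, with $a_{n}=n$ and $x_{n}=(-1)^{n+1}\sqrt{n}$ chosen so that $\sum x_{n}/a_{n}$ is a convergent alternating series.

```python
import numpy as np

# Illustration of Kronecker's lemma (not part of the proof): take a_n = n and
# x_n = (-1)^{n+1} sqrt(n), so that sum x_n / a_n = sum (-1)^{n+1} / sqrt(n)
# converges (alternating series) while sum x_n itself diverges.
n = 10**6
m = np.arange(1, n + 1)
x = (-1.0) ** (m + 1) * np.sqrt(m)

series = np.cumsum(x / m)     # partial sums of sum x_n / a_n
ratio = np.cumsum(x) / m      # a_n^{-1} sum_{m<=n} x_m

for k in [10**2, 10**4, 10**6]:
    print(f"n = {k:>7}:  sum(x/a) = {series[k - 1]: .5f},  "
          f"a_n^-1 sum(x) = {ratio[k - 1]: .5f}")
# sum(x/a) stabilizes while a_n^-1 sum(x) -> 0, as the lemma asserts.
```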
Proof of Lemma 2:
Set $Y_{k}:=X_{k}\mathbb{1}_{(|X_{k}|\leq k)}$.
Firstly, we observe that $$Var(Y_{k})\leq E[Y_{k}^{2}]=\int_{0}^{\infty}2yP(|Y_{k}|>y)\,dy\leq\int_{0}^{k}2yP(|X_{1}|>y)\,dy,$$ where the last step uses $|Y_{k}|\leq k$ and $P(|Y_{k}|>y)\leq P(|X_{k}|>y)=P(|X_{1}|>y)$. Then, since everything is $\geq 0$ and the sum is just an integral with respect to counting measure on $\{1,2,\dots\}$, we can use Fubini's Theorem to yield:
\begin{align*}
\sum_{k=1}^{\infty}\dfrac{E(Y_{k}^{2})}{k^{2}}&\leq\sum_{k=1}^{\infty}k^{-2}\int_{0}^{\infty}\mathbb{1}_{(y<k)}2yP(|X_{1}|>y)dy\\
&=\int_{0}^{\infty}\Big(\sum_{k=1}^{\infty}k^{-2}\mathbb{1}_{(y<k)}\Big)2yP(|X_{1}|>y)dy
\end{align*}
Now, since $E|X_{1}|=\int_{0}^{\infty}P(|X_{1}|>y)dy$, the result follows immediately from the lemma below.
Lemma 2.1: If $y\geq 0$, then $2y\sum_{k>y}k^{-2}\leq 4$.
Proof of Lemma 2.1:
Note that if $m\geq 2$, we then have $$\sum_{k\geq m}k^{-2}\leq\int_{m-1}^{\infty}x^{-2}dx=(m-1)^{-1}.$$
Now, when $y\geq 1$, the sum starts at $k=[y]+1\geq 2$. Also, note that $y/[y]\leq 2$ for $y\geq 1$, the worst case being $y\nearrow 2$ (where $[y]=1$). Thus, applying the bound above with $m=[y]+1$, we have $$2y\sum_{k>y}k^{-2}\leq \dfrac{2y}{[y]}\leq 4.$$
Finally, for $0\leq y<1$, the sum starts at $k=1$, so using the bound above with $m=2$, we have $$2y\sum_{k>y}k^{-2}\leq 2\Big(1+\sum_{k=2}^{\infty}k^{-2}\Big)\leq 2(1+1)=4.$$
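For what it's worth, the bound in Lemma 2.1 can also be checked numerically; the snippet below (truncating the tail of the series) is only an illustration.

```python
import numpy as np

# Numerical check of Lemma 2.1 (illustration only): f(y) = 2y * sum_{k>y} k^{-2}
# stays below 4. The infinite series is truncated at K; for y <= 50 the
# resulting error is at most about 2y/K, which is negligible here.
K = 10**6
inv_sq = 1.0 / np.arange(1, K + 1) ** 2
# tail[j] = sum over k > j of k^{-2}, for j = 0, ..., K
tail = np.concatenate([np.cumsum(inv_sq[::-1])[::-1], [0.0]])

ys = np.linspace(0.0, 50.0, 100001)
f = 2 * ys * tail[np.floor(ys).astype(int)]
print(f"max of 2y * sum_(k>y) k^-2 on [0, 50]: {f.max():.4f}  (bound: 4)")
# Prints roughly 3.29 (attained just below y = 1), comfortably below 4.
```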
The whole proof is now complete. Please let me know if there is any problem with it. Enjoy :)
Best Answer
In my edition of the book it reads
$$\lim_{n \to \infty} \sup_{\color{red}{u \leq u_0} } \left| \frac{Y(nu)}{n}-u \right|=0 \quad \text{a.s.}$$
So let's prove this. Fix $\varepsilon>0$. For any $n \in \mathbb{N}$ we have by Etemadi's inequality
$$p_n := \mathbb{P} \left( \sup_{u \leq u_0} \left| \frac{Y(nu)}{n}-u\right| > 3\varepsilon \right) \leq 3\sup_{u \leq u_0} \mathbb{P} \left( \left| \frac{Y(nu)}{n}-u\right|>\varepsilon \right). \tag{1}$$
The idea is to show that $$\sum_{n \in \mathbb{N}} p_n<\infty; \tag{2}$$ the claim then follows from the Borel-Cantelli lemma. In order to prove $(2)$ we note that we can choose a constant $C>0$ such that for any $|\lambda| \leq 1$
$$\mathbb{E}e^{\lambda \tilde{Y}_t} \leq e^{Ct \lambda^2}, \tag{3}$$
where $\tilde{Y}_t :=Y_t-t$ denotes the compensated Poisson process; see the lemma below. The (exponential) Markov inequality and $(1)$ then shows
$$\begin{align*} p_n &\leq 3\sup_{u \leq u_0} \mathbb{P} \bigg( Y(nu)-nu>\varepsilon n\bigg)+3\sup_{u \leq u_0} \mathbb{P} \bigg( -(Y(nu)-nu)>\varepsilon n \bigg) \\ &\leq 3 \sup_{u \leq u_0} \bigg[ e^{-\varepsilon n \lambda}\,\mathbb{E}\exp \left(\lambda \tilde{Y}(nu)\right)+ e^{-\varepsilon n \lambda}\,\mathbb{E}\exp \left(-\lambda \tilde{Y}(nu)\right) \bigg]. \end{align*}$$
If we choose $\lambda=\frac{1}{\sqrt{n}}$ and apply $(3)$, then we get
$$p_n \leq 6 \exp \left( C u_0-\varepsilon \sqrt{n} \right).$$
Obviously, this entails $(2)$, since $\sum_{n \in \mathbb{N}} e^{-\varepsilon \sqrt{n}}<\infty$.
Proof of $(3)$: Since $Y_t \sim \text{Poi}(t)$, the exponential moments can be calculated explicitly: $$\mathbb{E}e^{\lambda Y_t} = e^{t \cdot (e^{\lambda}-1)}.$$ Hence, $$\mathbb{E}e^{\lambda \tilde{Y}_t} = e^{t \cdot (e^{\lambda}-1-\lambda)}.$$ For $\lambda \in [-1,1]$, we have $$|e^{\lambda}-1-\lambda| \leq C \cdot \lambda^2$$ (e.g. with $C=e-2$), and this proves $(3)$.
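As a small numerical check of the last inequality (illustration only), one can maximize the ratio $(e^{\lambda}-1-\lambda)/\lambda^{2}$ on $[-1,1]$:

```python
import numpy as np

# Numerical check (illustration only) that |e^l - 1 - l| <= C * l^2 on [-1, 1]:
# the ratio (e^l - 1 - l) / l^2 is maximized at l = 1, so C = e - 2 works.
lam = np.linspace(-1.0, 1.0, 200001)
lam = lam[lam != 0.0]                       # the ratio is defined for l != 0
ratio = (np.exp(lam) - 1.0 - lam) / lam**2  # nonnegative, so no abs needed
print(f"max ratio on [-1, 1]: {ratio.max():.6f},  e - 2 = {np.e - 2:.6f}")
```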
Remark The claim holds for any Lévy process $(Y_t)_{t \geq 0}$ with finite exponential moments:
$$\lim_{n \to \infty} \sup_{u \leq u_0} \left| \frac{Y(nu)}{n}-\mathbb{E}Y_1 \cdot u \right|=0 \quad \text{a.s.}$$
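If it helps intuition, the uniform convergence can also be seen in simulation for the Poisson case; the sketch below uses arbitrary parameters and approximates the sup over a finite grid, so it only suggests, not proves, the claim.

```python
import numpy as np

# Simulation of the claim for the Poisson case (illustration with arbitrary
# parameters): Y(nu) ~ Poisson(nu), and the sup over u <= u0 is approximated
# on a finite grid.
rng = np.random.default_rng(1)
u0 = 2.0
grid = np.linspace(0.0, u0, 2001)

for n in [10, 100, 1000, 10000]:
    # Independent Poisson increments of Y(n * u) along the grid.
    increments = rng.poisson(lam=n * np.diff(grid, prepend=0.0))
    Y = np.cumsum(increments)
    sup_dev = np.max(np.abs(Y / n - grid))
    print(f"n = {n:>5}:  sup_(u<=u0) |Y(nu)/n - u| ~ {sup_dev:.4f}")
```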