In general, to show that $P(\limsup_n Y_n=M)=1$ for random variables $Y_n$, you need to show two things:
For all $\epsilon>0$, $P(Y_n>M+\epsilon \text{ infinitely often})=0.$
For all $\epsilon>0$, $P(Y_n>M-\epsilon \text{ infinitely often})=1.$
To prove both of these, we will use the Borel-Cantelli lemmas, which involve examining the convergence of $\sum_{n\ge 1}P(X_n \ln\ln n/\ln n>1\pm \epsilon)$. Therefore, you will need a good estimate on $P(X_n\ge k)$, which you will apply with $k=(1\pm \epsilon)\ln n/\ln \ln n$.
Assume for now that $k$ is an integer. On the one hand, $P(X\ge k)\ge P(X=k)=\frac{e^{-1}}{k!}$. On the other hand, you can upper bound $P(X\ge k)$ by a geometric series:
$$
P(X\ge k)
=\frac{e^{-1}}{k!}\sum_{i\ge k}{1 \over (k+1)(k+2)\cdots (i-1)\cdot i}
\le \frac{e^{-1}}{k!}\sum_{i\ge k}\frac1{k^{i-k}}
=\frac{e^{-1}}{k!}\cdot \frac{1}{1-1/k}
$$
This shows that $\lim_{k\to\infty}\frac{P(X\ge k)}{P(X=k)}=1$, so you can use the simple estimate $P(X\ge k)\approx e^{-1}/k!$ to determine the summability of $P(X_n\ge (1\pm \epsilon)\ln n/\ln \ln n)$.
Therefore, you just need to show that the sum
$$
\sum_{n\ge 1}P\left(X_1\ge (1\pm \epsilon)\frac{\ln n}{\ln \ln n}\right)
$$
is finite when the sign is $+$, and infinite when the sign is $-$. This is equivalent to showing the same about the sums
$$
e^{-1}\sum_{n\ge 1}\frac1{\Big((1\pm \epsilon)\ln n/\ln \ln n\Big)!}
$$
To prove this sum is infinite/finite, use Stirling's approximation, which says $\lim_{k\to\infty}\frac{k!}{k^ke^{-k}\sqrt{2\pi k}}=1$, so you can replace $k!$ with $k^ke^{-k}\sqrt{2\pi k}$ without affecting convergence. The computation gets a bit messy, but everything works out nicely in the end. There is also the small issue that the argument of the factorial is not an integer, so you really need to round down, but this does not affect the outcome.
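Concretely, here is a sketch of the Stirling computation. Set $k_n=(1\pm\epsilon)\ln n/\ln\ln n$. Taking logarithms in Stirling's formula,
$$
\ln\frac{1}{k_n!}=-k_n\ln k_n+k_n+O(\ln k_n),
$$
and since $\ln k_n=\ln\ln n-\ln\ln\ln n+\ln(1\pm\epsilon)$,
$$
k_n\ln k_n=(1\pm\epsilon)\ln n\left(1-\frac{\ln\ln\ln n}{\ln\ln n}+O\Big(\frac{1}{\ln\ln n}\Big)\right)=(1\pm\epsilon)(1+o(1))\ln n.
$$
The term $+k_n$ is $o(\ln n)$ as well, so the general term of the series is $n^{-(1\pm\epsilon)+o(1)}$: summable when the sign is $+$, divergent when it is $-$.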
This is a simple application of the second Borel-Cantelli lemma: if the events are independent and $\sum_n P(X_n \leq a_n)=\infty$, then $X_n \leq a_n$ holds infinitely often with probability $1$. So you only have to check that $\sum_n \left[1-e^{-(\ln n+ \ln \ln n +\ln^{2} \ln \ln n)}\right]=\infty$. Can you check this? [Hint: the general term of this series does not tend to $0$.]
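To see the hint concretely, here is a quick numeric check (my own addition; I read $\ln^{2}\ln\ln n$ as $(\ln\ln\ln n)^2$, but either reading gives the same conclusion, since the exponent tends to infinity in both cases):

```python
import math

def term(n):
    # 1 - exp(-(ln n + ln ln n + (ln ln ln n)^2)); the exponent tends to
    # infinity, so exp(-(...)) -> 0 and the general term tends to 1, not 0.
    expo = math.log(n) + math.log(math.log(n)) + math.log(math.log(math.log(n))) ** 2
    return 1.0 - math.exp(-expo)

for n in (10, 1000, 10 ** 6):
    print(n, term(n))
```

Since the general term tends to $1$, the divergence test makes the series infinite trivially.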
Best Answer
This is the Brezis-Lieb lemma and goes as follows: suppose that $p>1$.
Prove first that there is a constant $C$ depending on $p$ such that for all $a,b\in\mathbb{R}$ you have $$\left||a+b|^p-|a|^p-|b|^p \right|\leq C\left(|a|^{p-1}|b|+|a||b|^{p-1} \right).$$
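As a sanity check on this inequality (numerics only, not a proof; the grid and the choices $p=2,3$ are mine), one can compute the largest ratio of the left-hand side to the bracket on the right, which any valid $C$ must dominate:

```python
import itertools

def lhs(a, b, p):
    return abs(abs(a + b) ** p - abs(a) ** p - abs(b) ** p)

def rhs(a, b, p):
    # the bracket on the right-hand side, without the constant C
    return abs(a) ** (p - 1) * abs(b) + abs(a) * abs(b) ** (p - 1)

def max_ratio(p, grid):
    """Largest lhs/rhs over the grid; a valid C must be at least this."""
    best = 0.0
    for a, b in itertools.product(grid, repeat=2):
        r = rhs(a, b, p)
        if r > 0:
            best = max(best, lhs(a, b, p) / r)
    return best

grid = [x / 10 for x in range(-50, 51) if x != 0]
print(max_ratio(2.0, grid))  # ratio is identically 1 for p = 2, since lhs = 2|ab|
print(max_ratio(3.0, grid))  # about 3 for p = 3, attained when a, b share a sign
```

The bounded ratios are consistent with the existence of a finite $C$ depending only on $p$.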
Assume in addition that $\|\xi_n\|_{L^p}\leq M$ for some $M\in\mathbb{R}$ and prove that $$\mathbb{E}\left[|\xi_n|^p-|\xi_n-\xi|^p\right]\to \mathbb{E}[|\xi|^p].$$
Now remove the above assumption, and conclude.
For the case $p=1$ (the following argument in fact works for any $p\geq 1$):
Here is a way to prove the result without the Brezis-Lieb lemma. For $p\geq 1$ we have the inequality $$|a-b|^p\leq 2^{p-1}(|a|^p+|b|^p)\;\;\text{ for all }a,b\in\mathbb{R},$$ and therefore $$|\xi_n-\xi|^p\leq 2^{p-1}(|\xi_n|^p+|\xi|^p).$$ Since $\xi_n \xrightarrow {a.e.} \xi$, you have $$2^{p-1}(|\xi_n|^p+|\xi|^p)\xrightarrow {a.e.} 2^{p}|\xi|^p\in L^1\;\;\; (\text{since }\xi\in L^p),$$ and by the assumption $\mathbb{E}[|\xi_n|^p]\to \mathbb{E}[|\xi|^p]$ we have $$ \mathbb{E}[2^{p-1}(|\xi_n|^p+|\xi|^p)]\to \mathbb{E}[2^p\,|\xi|^p]. $$ Therefore, by the generalized dominated convergence theorem (dominated convergence with a convergent sequence of dominating functions) applied to $$ f_n:=|\xi_n-\xi|^p, \;\;\; f:=0,\;\;\;g_n:=2^{p-1}(|\xi_n|^p+|\xi|^p),\;\;\;g:=2^p|\xi|^p,$$ you get $\mathbb{E}[f_n]\to 0$, as desired.
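A standard counterexample (my illustration, not from the answer) shows that the hypothesis $\mathbb{E}[|\xi_n|^p]\to \mathbb{E}[|\xi|^p]$ cannot be dropped: take $\xi_n=n\,\mathbf{1}_{(0,1/n)}$ on $([0,1],\text{Lebesgue})$ with $p=1$. Then $\xi_n\to 0$ a.e., but $\mathbb{E}[|\xi_n-0|]=1$ for every $n$:

```python
# Midpoint-rule approximation of Lebesgue measure on [0, 1].
N = 10 ** 5
grid = [(i + 0.5) / N for i in range(N)]

def expectation(f):
    return sum(f(x) for x in grid) / N

def xi(n):
    # the moving-bump sequence n * 1_{(0, 1/n)}
    return lambda x, n=n: float(n) if x < 1.0 / n else 0.0

for n in (10, 100, 1000):
    # E[|xi_n - 0|] stays equal to 1 even though xi_n(x) -> 0 for each fixed x
    print(n, expectation(xi(n)))
```

Plain dominated convergence also fails here: no integrable $g$ dominates every $\xi_n$, and indeed $\mathbb{E}[|\xi_n|]\not\to \mathbb{E}[0]$, which is exactly the hypothesis the theorem uses.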