With your assumptions the best you can get is via Fatou's Lemma:
$$\mathbb{E}[|X|]\leq \liminf_{n\to\infty}\mathbb{E}[|X_n|]$$
(where you used the continuous mapping theorem to get that $|X_n|\Rightarrow |X|$).
For a "positive" answer to your question: you need the sequence $(X_n)$ to be uniformly integrable:
$$\lim_{\alpha\to\infty} \sup_n \int_{|X_n|>\alpha}|X_n|d\mathbb{P}= \lim_{\alpha\to\infty} \sup_n \mathbb{E} [|X_n|1_{|X_n|>\alpha}]=0.$$
Then, one gets that $X$ is integrable and $\lim_{n\to\infty}\mathbb{E}[X_n]=\mathbb{E}[X]$.
As a remark, to get uniform integrability of $(X_n)_n$ it suffices, for example, to have:
$$\sup_n \mathbb{E}[|X_n|^{1+\varepsilon}]<\infty,\quad \text{for some }\varepsilon>0.$$
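To make the criterion concrete, here is a minimal numerical sketch (plain Python; it uses the two-point laws $P(X_n=n)=1/n$, $P(X_n=0)=1-1/n$ from the counterexample in the last answer below): the truncated means $\mathbb{E}[|X_n|1_{|X_n|>\alpha}]$ never decay uniformly in $n$, and consistently the $L^{1+\varepsilon}$ moments blow up, so this family is not uniformly integrable.

```python
# Minimal sketch: exact uniform-integrability check for the two-point laws
# P(X_n = n) = 1/n, P(X_n = 0) = 1 - 1/n (the counterexample further below).

def truncated_mean(n: int, alpha: float) -> float:
    """E[|X_n| 1_{|X_n| > alpha}], computed exactly: n * (1/n) = 1 once n > alpha."""
    return 1.0 if n > alpha else 0.0

def abs_moment(n: int, p: float) -> float:
    """E[|X_n|^p] = n^p * (1/n) = n^(p - 1), computed exactly."""
    return n ** (p - 1)

for alpha in (10, 100, 1000):
    # sup_n E[|X_n| 1_{|X_n| > alpha}] stays at 1 no matter how large alpha is,
    # so the defining limit is 1, not 0: (X_n) is NOT uniformly integrable.
    sup_trunc = max(truncated_mean(n, alpha) for n in range(1, 10_000))
    print(f"alpha = {alpha:4d}:  sup_n E[|X_n| 1(|X_n| > alpha)] = {sup_trunc}")

# Consistently, the L^{1+eps} criterion fails: E[|X_n|^{1.1}] = n^{0.1} is unbounded.
print([round(abs_moment(n, 1.1), 2) for n in (10, 100, 1000, 10_000)])
```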
I came up with some solutions, although they might be somewhat more complicated than necessary!
- Let $\epsilon>0$. As mentioned, by the standard CLT, $S_n/\sqrt{n} \stackrel{D}{\to}N(0,1)$. Using this fact, choose $b>0$ and $N_0$ so that, for all $n\ge N_0$, $P(S_n/\sqrt{n} \ge b ) < \epsilon$. Now, the probability of interest can be decomposed into two terms:
$P(S_n = k^2 \text{ for some } k)=P(S_n = k^2 \text{ for some } k,\ S_n<b\sqrt{n})+P(S_n = k^2 \text{ for some } k,\ S_n\ge b\sqrt{n}).$
For $n\ge N_0$, the second term on the right-hand side is at most $P(S_n \ge b\sqrt{n}) < \epsilon$.
Regarding the first term, note that the number of integers $k\ge 0$ with $k^2 < b\sqrt{n}$ is at most $\sqrt{b}\,n^{1/4}+1$. The most likely single value of $S_n$ is zero (take $n$ even; odd $n$ is handled the same way with $\pm 1$ in place of $0$), and $P(S_n=0) ={n \choose n/2}(1/2)^n$. Using Stirling's formula to approximate the factorials in ${n \choose n/2}$ gives ${n \choose n/2} \le C2^n/\sqrt{n}$, which in turn gives $P(S_n=0) \le C/\sqrt{n}$, where $C$ is an absolute constant. Since this is the largest point probability,
$P(S_n = k^2 \text{ for some } k,\ S_n<b\sqrt{n}) \le \text{[number of terms]}\times\text{[largest point probability]} \le (\sqrt{b}\,n^{1/4}+1)\cdot\frac{C}{\sqrt{n}} \le \frac{C'\sqrt{b}}{n^{1/4}}< \epsilon$
for $n$ sufficiently large, say $n \ge N_1$. Thus for $n \ge \max \{N_0,N_1\}$, $P(S_n = k^2 \text{ for some } k) < 2\epsilon$, completing the proof.
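  As a quick numerical sanity check of the $n^{-1/4}$ decay (a minimal sketch; it samples $S_n = 2B_n - n$ with $B_n \sim \text{Binomial}(n,1/2)$, and uses NumPy purely for speed):

  ```python
  # Monte Carlo sketch: estimate P(S_n = k^2 for some integer k >= 0)
  # for the simple +/-1 walk, using S_n = 2*B_n - n, B_n ~ Binomial(n, 1/2).
  # If the bound above is tight, P_hat * n^(1/4) should stay roughly constant.
  import numpy as np

  rng = np.random.default_rng(0)

  def estimate(n: int, trials: int = 200_000) -> float:
      s = 2 * rng.binomial(n, 0.5, size=trials) - n  # samples of S_n
      s = s[s >= 0]                                  # perfect squares are >= 0
      r = np.rint(np.sqrt(s)).astype(np.int64)       # candidate square roots
      return np.count_nonzero(r * r == s) / trials

  for n in (100, 400, 1600, 6400):
      p = estimate(n)
      print(f"n = {n:5d}   P_hat = {p:.4f}   P_hat * n^(1/4) = {p * n ** 0.25:.3f}")
  ```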
- We want to evaluate
$\lim_{n\to \infty} \frac{\log P(S_n/n > t)}{n}$
As $-n \le S_n \le n$, the probability is identically $0$ when $t>1$ (so the expression is not even defined there), and the limit is $0$ when $t<0$ (the probability tends to $1$ by the law of large numbers), so the interesting case is $t\in(0,1)$. Notice that $S_n/2 \stackrel{D}{=} B_n - n/2$, where $B_n$ is a Binomial random variable with parameters $n$ and $1/2$.
Two ingredients I use here are:
a) Hoeffding's inequality: $P( B_n > (s+1/2)n) \le e^{-2s^2n}$ for $s>0$.
b) Tail bounds for the normal distribution: if $Z\sim N(0,1)$ and $x>0$, $\left(\frac{1}{\sqrt{2 \pi}\,x}-\frac{1}{\sqrt{2 \pi}\,x^3}\right)e^{-x^2/2} \le P(Z>x) \le \frac{1}{\sqrt{2 \pi}\,x}e^{-x^2/2}$.
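Both bounds are easy to sanity-check numerically; here is a minimal sketch using the exact Gaussian tail $P(Z>x)=\tfrac12\operatorname{erfc}(x/\sqrt{2})$:

```python
# Sanity check of the Gaussian tail bounds (Mills-ratio type inequalities).
import math

def normal_tail(x: float) -> float:
    """P(Z > x) for Z ~ N(0, 1), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

for x in (1.5, 2.0, 3.0):
    c = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    lower = c * (1 / x - 1 / x ** 3)
    upper = c / x
    print(f"x = {x}:  {lower:.6f} <= {normal_tail(x):.6f} <= {upper:.6f}")
```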
Now, by multiplying and dividing by $P(Z > t\sqrt{n})$ inside the logarithm,
$\frac{\log P(S_n/n > t)}{n}= \frac{1}{n}\log \left(\frac{P(S_n/n > t)}{P(Z > \sqrt{n}t)}\right) + \frac{\log P(Z> t\sqrt{n})}{n}.$
Note that $P(S_n/n > t)= P(B_n > (t/2 + 1/2)n) \le \exp(-t^2n/2)$, using Hoeffding's inequality with $s = t/2$. With this and the tail bound for the normal distribution,
$\frac{P(S_n/n > t)}{P(Z > t\sqrt{n})} \le \frac{\exp(-t^2n/2)}{\left(\frac{1}{\sqrt{2 \pi}\,t\sqrt{n}}-\frac{1}{\sqrt{2 \pi}\,(t\sqrt{n})^3}\right)\exp(-t^2n/2) } = \left(\frac{1}{\sqrt{2 \pi}\,t\sqrt{n}}-\frac{1}{\sqrt{2 \pi}\,(t\sqrt{n})^3}\right)^{-1}.$
Elementary arguments show that $\frac{1}{n}$ times the logarithm of the right-hand side above tends to $0$ as $n\to \infty$, while $\lim_{n\to \infty} \frac{\log P(Z> t\sqrt{n})}{n} = -t^2/2$, which can be obtained using L'Hôpital's rule or the same normal tail bounds mentioned above. Hence $\limsup_{n\to \infty} \frac{\log P(S_n/n > t)}{n}\le-t^2/2,$ $t\in (0,1)$. A word of caution: since the ratio was only bounded from above, this argument yields the upper bound, not the limit itself. In fact the limit equals $-I(t)$ with $I(t) = \frac{1+t}{2}\log(1+t)+\frac{1-t}{2}\log(1-t) > \frac{t^2}{2}$, as a direct computation with Stirling's formula (or Cramér's theorem) shows; $I(t)$ agrees with $t^2/2$ only to leading order as $t\to 0$.
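  A minimal numerical sketch of this last point (exact Binomial tail via `lgamma` and a log-sum-exp; the function names are arbitrary): for moderate $t$ the empirical rate already sits on $I(t)$ rather than $t^2/2$.

  ```python
  # Exact computation of (1/n) * (-log P(S_n/n > t)) from the Binomial tail,
  # compared against the Hoeffding rate t^2/2 and the Cramer rate I(t).
  import math

  def log_binom_tail(n: int, k0: int) -> float:
      """log P(B_n >= k0) for B_n ~ Binomial(n, 1/2), via log-sum-exp."""
      logs = [math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
              - n * math.log(2) for k in range(k0, n + 1)]
      m = max(logs)
      return m + math.log(sum(math.exp(x - m) for x in logs))

  def rate_I(t: float) -> float:
      """Cramer rate for the +/-1 walk."""
      return 0.5 * (1 + t) * math.log(1 + t) + 0.5 * (1 - t) * math.log(1 - t)

  t, n = 0.5, 4000
  k0 = math.floor((t + 1) * n / 2) + 1   # S_n/n > t  <=>  B_n > (t+1)n/2
  empirical = -log_binom_tail(n, k0) / n
  print(f"empirical rate = {empirical:.4f}   "
        f"I(t) = {rate_I(t):.4f}   t^2/2 = {t * t / 2:.4f}")
  ```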
Best Answer
Convergence in distribution does not imply convergence of the first moment. For example, consider $X_n$ defined by \begin{align}P(X_n = 0) &= 1-\frac{1}{n} \\ P(X_n = n) &= \frac{1}{n} \end{align} Then $X_n \Rightarrow 0$ in distribution but $E[X_n] = 1 \nrightarrow 0$. To prove convergence in distribution, take $f$ a bounded continuous real function; then $$E[f(X_n)] = \big(1-\frac{1}{n}\big)\,f(0) + \frac{1}{n}\,f(n) \longrightarrow f(0) = E[f(0)]$$

To get the convergence you want, you should first use the law of large numbers: $$\frac{1}{n}\sum_{i=1}^{n}X_i \longrightarrow \mu \ \ \text{a.s.}$$ then argue that the $\overline{X}_n = \frac{1}{n}\sum_{i=1}^{n}X_i$ are uniformly integrable (for instance because they are bounded in $L^2$). Applying the Vitali convergence theorem then proves that $\overline{X}_n \to \mu$ in $L^1$.
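For completeness, a minimal sketch of the counterexample (plain Python; everything is computed exactly from the two-point law, no simulation needed), showing the mass escaping to infinity: $P(X_n=0)\to 1$ while $E[X_n]$ stays at $1$.

```python
# Exact check of the counterexample: X_n -> 0 in distribution
# (P(X_n = 0) -> 1) while E[X_n] = 1 for every n.
for n in (10, 100, 1000, 10_000):
    p_zero = 1 - 1 / n                    # P(X_n = 0) -> 1
    mean = 0 * (1 - 1 / n) + n * (1 / n)  # E[X_n] = 1 for all n
    print(f"n = {n:6d}   P(X_n = 0) = {p_zero:.4f}   E[X_n] = {mean:.1f}")
```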