You are probably stuck because the random variable $N_n$ may assume far too many values for Kolmogorov's inequality to provide an effective upper bound. This suggests dealing separately with the case when $N_n$ is close to $a_n$ (where Kolmogorov's inequality should yield small values of $S_{N_n}-S_{a_n}$) and the case when $N_n$ is far from $a_n$ (which, since $N_n/a_n\to1$ in probability, should have small probability).
Hence, let us introduce, for a given positive $\varepsilon$, the event $$A_n=[(1-\varepsilon) a_n\leqslant N_n\leqslant (1+\varepsilon) a_n].$$
On the one hand, $N_n/a_n\to1$ in probability hence $A_n$ is typical in the sense that $\mathrm P(\Omega\setminus A_n)\to0$.
On the other hand, $|S_{N_n}-S_{a_n}|\leqslant |S_{N_n}-S_{(1-\varepsilon) a_n}|+|S_{a_n}-S_{(1-\varepsilon) a_n}|$ hence, on the event $A_n$,
$$
|S_{N_n}-S_{a_n}|\leqslant 2M_n,\qquad M_n=\sup\limits_{1\leqslant k\leqslant 2\varepsilon a_n}|T_k|,\qquad T_k=S_{(1-\varepsilon) a_n+k}-S_{(1-\varepsilon) a_n}.
$$
Now, we are back to the realm where Kolmogorov's inequality applies, and yields
$$
\mathrm P(M_n\geqslant x\sqrt{a_n})\leqslant (a_nx^2)^{-1}\mathrm{Var}(T_{2\varepsilon a_n})=(a_nx^2)^{-1}(2\varepsilon a_n)\sigma^2=2\varepsilon x^{-2}\sigma^2.
$$
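As a quick numerical sanity check of the maximal inequality just used, here is a short simulation; it is only a sketch, with assumed Rademacher steps ($\sigma^2=1$) and illustrative values of $m$ and $x$, where $m$ stands in for $2\varepsilon a_n$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sanity check of Kolmogorov's maximal inequality,
#   P(max_k |T_k| >= lam) <= Var(T_m) / lam^2,
# with assumed Rademacher steps (mean 0, variance sigma^2 = 1);
# m plays the role of 2*eps*a_n and lam = x*sqrt(m) of x*sqrt(a_n)... up to scaling.
m = 1000
x = 2.0
lam = x * np.sqrt(m)
trials = 5000

steps = rng.choice([-1.0, 1.0], size=(trials, m))
partial = np.cumsum(steps, axis=1)          # T_1, ..., T_m, trial by trial
M = np.abs(partial).max(axis=1)             # M = max_{1<=k<=m} |T_k|

emp = (M >= lam).mean()                     # empirical P(M >= x*sqrt(m))
bound = m / lam**2                          # Var(T_m)/lam^2 = 1/x^2 = 0.25
print(emp, bound)
```

The empirical probability comes out well below the Kolmogorov bound, as it should, since the inequality is far from tight for symmetric walks.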
Putting our two estimates together yields
$$
\mathrm P(|S_{N_n}-S_{a_n}|\geqslant 2x\sqrt{a_n})\leqslant\mathrm P(\Omega\setminus A_n)+\mathrm P(M_n\geqslant x\sqrt{a_n})\leqslant\mathrm P(\Omega\setminus A_n)+2\varepsilon x^{-2}\sigma^2.
$$
This proves that, for every positive $\varepsilon$,
$$
\limsup\limits_{n\to\infty}\ \mathrm P(|S_{N_n}-S_{a_n}|\geqslant 2x\sqrt{a_n})\leqslant2\varepsilon x^{-2}\sigma^2,
$$
hence $\mathrm P(|S_{N_n}-S_{a_n}|\geqslant2x\sqrt{a_n})\to0$ for every positive $x$, that is, $S_{N_n}/\sqrt{a_n}-S_{a_n}/\sqrt{a_n}\to0$ in probability.
By the usual central limit theorem, since $a_n\to+\infty$, $S_{a_n}/\sqrt{a_n}$ converges in distribution to a centered Gaussian distribution with variance $\sigma^2$, hence, by Slutsky's lemma, $S_{N_n}/\sqrt{a_n}$ converges in distribution to the same centered Gaussian distribution.
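The whole argument can be illustrated numerically. The following sketch assumes Rademacher steps ($\sigma^2=1$), $a_n=n$, and the illustrative choice of $N_n$ uniform on $\{0.9n,\dots,1.1n\}$ (so that $N_n/a_n\to1$); none of these choices is forced by the problem:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustration: Rademacher steps (sigma^2 = 1), a_n = n,
# and N_n uniform on {0.9 n, ..., 1.1 n}, so that N_n / a_n -> 1.
n = 2000
trials = 4000
vals = np.empty(trials)
for t in range(trials):
    N = rng.integers(int(0.9 * n), int(1.1 * n) + 1)
    steps = rng.choice([-1.0, 1.0], size=N)
    vals[t] = steps.sum() / np.sqrt(n)      # S_{N_n} / sqrt(a_n)

# Sample mean and standard deviation should be close to 0 and sigma = 1.
print(vals.mean(), vals.std())
```

The sample of $S_{N_n}/\sqrt{a_n}$ indeed looks centered with standard deviation close to $\sigma=1$, consistent with the random-index CLT.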
The first part looks OK, but I would apply the central limit theorem, not the law of large numbers. The lower bound on the probability of the $\limsup$ has to be justified (portmanteau theorem).
For part b), we can use the following idea: if $S_n/\sqrt n\to \chi$ in probability for some random variable $\chi$, we would have
$$Y_n:=\frac{S_{2n}}{\sqrt{2n}}-\frac{S_n}{\sqrt n}\to 0 \mbox{ in probability}.$$
But $Y_n=\frac{S_{2n}-S_n}{\sqrt{2n}}+\frac{S_n}{\sqrt n}\left(\frac 1{\sqrt 2}-1\right)=:Y'_n+Y''_n$. Since $S_{2n}-S_n$ is independent of $S_n$, we can compute the limit in distribution of each of the two terms composing $Y_n$. Notice that $Y'_n$ has the same distribution as $S_n/\sqrt{2n}$, which converges in distribution to a centered normal random variable of variance $\sigma^2/2$, while $Y''_n$ converges in distribution to a centered normal random variable of variance $\sigma^2(1-\sqrt 2)^2/2$. Therefore, by independence, $Y_n$ converges in distribution to a non-degenerate centered Gaussian random variable of variance $\sigma^2/2+\sigma^2(1-\sqrt 2)^2/2=\sigma^2(2-\sqrt 2)$, which contradicts $Y_n\to0$ in probability.
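A short simulation illustrates this; for the sketch, I assume standard normal steps with $\sigma^2=1$, in which case the variance of $Y_n$ should sit near $2-\sqrt 2\approx0.586$ rather than vanish:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch with assumed standard normal steps (sigma^2 = 1): the variance
# of Y_n = S_{2n}/sqrt(2n) - S_n/sqrt(n) equals 2 - sqrt(2) ~ 0.586,
# so Y_n does not go to 0 in probability.
n = 1000
trials = 10000
steps = rng.standard_normal((trials, 2 * n))
S_n = steps[:, :n].sum(axis=1)              # S_n
S_2n = steps.sum(axis=1)                    # S_{2n}
Y = S_2n / np.sqrt(2 * n) - S_n / np.sqrt(n)

print(Y.var())                              # near 2 - sqrt(2)
```

The empirical variance matches the value $\sigma^2(2-\sqrt 2)$ computed above, confirming that $Y_n$ stays non-degenerate.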
Best Answer
(I have taken the variance to be $1$, but you only need a trivial modification when it is not.)
Let $0<M<\infty$. Then $P(\frac {S_n} {\sqrt n} <M) \to \Phi(M)$, where $\Phi$ is the standard normal distribution function. By Fatou's lemma, $P(\liminf_n \{\frac {S_n} {\sqrt n} <M\}) \leq \liminf_n P(\frac {S_n} {\sqrt n} <M) = \Phi(M)$. Since $\{\limsup_n \frac {S_n} {\sqrt n} <M\} \subseteq \liminf_n \{\frac {S_n} {\sqrt n} <M\}$, this implies $P(\limsup_n \frac {S_n} {\sqrt n} <M) \leq \Phi(M)<1$. But $\limsup_n \frac {S_n} {\sqrt n}$ is a tail random variable, so by Kolmogorov's 0-1 law this probability is $0$ or $1$; hence it is $0$, and this holds for each $M$. Therefore $\limsup_n \frac {S_n} {\sqrt n}=\infty$ almost surely.
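The convergence $P(S_n/\sqrt n<M)\to\Phi(M)$ used above is easy to check by simulation; this sketch assumes standard normal steps, so the variance is $1$ as in the answer:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)

# Illustration with variance 1 (standard normal steps), as in the answer:
# P(S_n / sqrt(n) < M) should be close to Phi(M).
n = 100
trials = 100_000
S = rng.standard_normal((trials, n)).sum(axis=1)
M = 1.0
emp = (S / np.sqrt(n) < M).mean()           # empirical P(S_n/sqrt(n) < M)
Phi_M = 0.5 * (1 + erf(M / sqrt(2)))        # Phi(1) ~ 0.8413

print(emp, Phi_M)
```

With $10^5$ trials, the empirical probability agrees with $\Phi(1)$ to within Monte Carlo error, which is the only distributional input the Fatou argument needs.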