The mean of $S_n$ is $n\mu$ and the variance of $S_n$ is $n\sigma^2$, so the standard deviation of $S_n$ is $\sqrt{n}\sigma$.
Consequently, $\dfrac{S_n}{n}$ has mean $\mu$, variance $\dfrac{\sigma^2}{n}$, and standard deviation $\dfrac{\sigma}{\sqrt{n}}$. I suspect this is what led to your phrase "convergence happens with rate $\sqrt{n}$".
So, defining $T_n = \dfrac{S_n-n\mu}{\sqrt{n}\sigma}$, you find that $T_n$ has mean $0$, variance $1$, and standard deviation $1$.
What the central limit theorem says is that $T_n$ converges in distribution to a standard normal as $n$ increases, i.e.
$$\lim_{n\to\infty} \Pr\left[T_n \le t\right] = \Phi\left(t\right)$$
or
$$\lim_{n\to\infty} \Pr\left[\dfrac{S_n-n\mu}{\sqrt{n}\sigma} \le t\right] = \Phi\left(t\right)$$
or
$$\lim_{n\to\infty} \Pr\left[\sqrt{n}\dfrac{\frac{S_n}{n}-\mu}{\sigma} \le t\right] = \Phi\left(t\right)$$
and, letting $z=t\sigma$,
$$\lim_{n\to\infty} \Pr\left[\dfrac{S_n-n\mu}{\sqrt{n}} \le z\right] = \Phi\left(\frac{z}{\sigma}\right)$$
or
$$\lim_{n\to\infty} \Pr\left[\sqrt{n}\left(\frac{S_n}{n}-\mu\right)\le z\right] = \Phi\left(\frac{z}{\sigma}\right)$$
which is not quite what you have written, as it is a description of the behaviour of $\frac{S_n}{n}$.
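As a quick numerical check of the last display (a minimal sketch; the Exponential(1) distribution, the sample sizes, and the threshold $t=0.5$ are my own illustrative choices, not from the question), one can estimate $\Pr\left[\sqrt{n}\frac{S_n/n-\mu}{\sigma} \le t\right]$ by simulation and compare it with $\Phi(t)$:

```python
import numpy as np
from math import erf, sqrt

def Phi(t):
    # Standard normal CDF, written via the error function.
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

rng = np.random.default_rng(0)

# Illustrative choice: X_i ~ Exponential(1), so mu = sigma = 1.
mu, sigma = 1.0, 1.0
t, reps = 0.5, 20_000

for n in (5, 50, 500):
    S = rng.exponential(1.0, size=(reps, n)).sum(axis=1)  # S_n per replication
    T = (S - n * mu) / (sqrt(n) * sigma)                  # standardized T_n
    print(f"n={n:3d}  Pr[T_n <= {t}] ~ {np.mean(T <= t):.4f}"
          f"   Phi({t}) = {Phi(t):.4f}")
```

Even for a skewed starting distribution, the estimates approach $\Phi(0.5)\approx 0.6915$ as $n$ grows.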
Under the conditions stated above, the following conclusion holds:
\begin{equation*} \frac{1}{\sqrt{n}}\sum_{k=1}^{n}(Y_k-C_1\mathsf{E}[X_1^2])\overset{d}{\longrightarrow} N(0,\sigma^2). \tag{1}
\end{equation*}
This conclusion can be proved by the CLT for arrays of martingale differences (MD); cf. P. Hall and C. C. Heyde, *Martingale Limit Theory and Its Application*, Academic Press (1980), Theorem 3.2, pp. 58 ff.
The following is an outline of the proof.
Denote
\begin{equation*}
W_0=0, \quad W_{k-1}=\sum_{j=1}^{k-1}d_{k,j}X_j , \quad k\ge 2.
\end{equation*}
Then
\begin{gather*}
Y_k=\Big(\sum_{j=1}^{k}d_{k,j}X_j \Big)X_k
=W_{k-1}X_k+d_{k,k}X_k^2\\
\mathsf{E}[Y_k]=d_{k,k}\mathsf{E}[X_k^2]=C_1\mathsf{E}[X_1^2]
\overset{\triangle}{=}m.
\end{gather*}
Let
\begin{gather*}
Z_{n,k}=\frac{1}{\sqrt{n}}(Y_k-m), \quad 1\le k\le n, n\ge 1.\\
\mathscr{F}_{k}=\sigma\{ X_j,1\le j\le k\}\vee\mathscr{N},\quad 1\le k\le n, n\ge 1.
\end{gather*}
Then
\begin{equation*}
\mathsf{E}[Z_{n,k}|\mathscr{F}_{k-1}]
=\frac{1}{\sqrt{n}}(W_{k-1}\mathsf{E}[X_k]+
d_{k,k}\mathsf{E}[X_k^2]-m)=0,
\end{equation*}
and $Z=\{ Z_{n,k}, \mathscr{F}_{k}, 1\le k\le n, n\ge 1\}$ is an MD array (array of martingale differences). Now we verify that $Z$ satisfies the conditions under which $S_n=\sum_{k\le n}Z_{n,k} \overset{d}{\to} N(0,\sigma^2)$.
First, since the $\{X_i,i\ge 1\}$ are bounded, $\{W_i,i\ge 1\}$ and $\{Y_i,i\ge 1\}$ are bounded too, and
\begin{equation*}
\max_{1\le k\le n}|Z_{n,k}|\le \frac{C}{\sqrt{n}} \tag{2}
\end{equation*}
where, here and in what follows, $C$ denotes a constant independent of $k$ and $n$, possibly different at each occurrence.
Second, a direct calculation shows that
\begin{align*}
\lim_{n\to\infty}\frac1n\sum_{j=1}^{n}W_j&=0, \quad \text{a.s.} \tag{3}\\
\lim_{n\to\infty}\frac1n\sum_{j=1}^{n}W_j^2&=b>0,\quad\text{a.s.} \tag{4}
\end{align*}
Hence,
\begin{align*}
&\mathsf{E}[Z_{n,k}^2|\mathscr{F}_{k-1}]\\
&\quad =\frac1n\mathsf{E}[(W_{k-1}X_k+C_1(X_k^2-\mathsf{E}[X_k^2]))^2 | \mathscr{F}_{k-1}]\\
&\quad =\frac1n\big[W_{k-1}^2\mathsf{E}[X_k^2]+C_1^2\mathsf{E}[(X_k^2-\mathsf{E}[X_k^2])^2]\\
&\qquad +2C_1W_{k-1}\mathsf{E}[(X_k^2-\mathsf{E}[X_k^2])X_k]\big],\\
&\sum_{k=1}^{n}\mathsf{E}[Z_{n,k}^2|\mathscr{F}_{k-1}]\\
&\quad = \frac1n \sum_{k=1}^{n}W_{k-1}^2\,\mathsf{E}[X_1^2] +
C_1^2 \mathsf{E}[(X_1^2-\mathsf{E}[X_1^2])^2] \\
&\qquad + \frac{2C_1\mathsf{E}[(X_1^2-\mathsf{E}[X_1^2])X_1]}n \sum_{k=1}^{n}W_{k-1}\\
&\quad \to b\,\mathsf{E}[X_1^2]+C_1^2 \mathsf{E}[(X_1^2-\mathsf{E}[X_1^2])^2]
\overset{\triangle}{=}\sigma^2 \quad\text{by (3) and (4)}. \tag{5}
\end{align*}
Finally, from (2) and (5), we have
\begin{align*}
\frac{1}{\sqrt{n}}\sum_{k=1}^{n}(Y_k-C_1\mathsf{E}[X_1^2])
=\sum_{k=1}^{n}Z_{n,k}=S_n\overset{d}{\longrightarrow}N(0,\sigma^2).
\end{align*}
i.e., (1) is true.
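For concreteness, here is a small simulation of (1) under one hypothetical choice of the ingredients (not from the original problem, only an illustration consistent with the conditions used above): $X_i$ i.i.d. uniform on $[-\sqrt3,\sqrt3]$ (bounded, $\mathsf{E}[X_1]=0$, $\mathsf{E}[X_1^2]=1$) and $d_{k,j}=\rho^{k-j}$ with $\rho=1/2$, so that $C_1=d_{k,k}=1$, $m=1$, $b=\rho^2/(1-\rho^2)=1/3$, and, by (5), $\sigma^2=b+\mathsf{E}[(X_1^2-1)^2]=1/3+4/5$.

```python
import numpy as np
from math import sqrt

rng = np.random.default_rng(1)

# Hypothetical weights d_{k,j} = rho**(k-j): then W_{k-1} = rho*V_{k-1} for
# V_k = rho*V_{k-1} + X_k (V_0 = 0), so Y_k = W_{k-1}*X_k + X_k**2 = V_k*X_k.
rho = 0.5
n, reps = 2_000, 5_000
m = 1.0                                     # m = C_1 * E[X_1^2]
sigma2 = rho**2 / (1 - rho**2) + 4.0 / 5.0  # predicted limit variance from (5)

X = rng.uniform(-sqrt(3), sqrt(3), size=(reps, n))
V = np.zeros(reps)
total = np.zeros(reps)                      # accumulates sum_k (Y_k - m)
for k in range(n):
    V = rho * V + X[:, k]
    total += V * X[:, k] - m

T = total / sqrt(n)                         # left-hand side of (1)
print(f"sample mean {T.mean():+.4f}  (target 0)")
print(f"sample var  {T.var():.4f}  (target sigma^2 = {sigma2:.4f})")
```

The sample variance should land near $17/15\approx 1.133$, in line with (5).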
Best Answer
I think you've basically defined it. You can say a sequence $Y_n$ of random variables converges of order $a_n$ if $Y_n/a_n$ converges in distribution to a random variable which isn't identically zero. The reason to have division instead of multiplication is so that $Y_n = a_n$ itself converges of order $a_n$. You should think of this as meaning "$Y_n$ grows or decays at about the same rate as $a_n$".
The key tool is Slutsky's theorem: if $Z_n \to Z$ in distribution and $c_n \to c$, then $c_n Z_n \to cZ$ in distribution. So suppose $Y_n$ converges of order $a_n$, so that $\frac{Y_n}{a_n}$ converges in distribution to some nontrivial $W$. If $b_n / a_n \to \infty$, then $\frac{Y_n}{b_n} = \frac{Y_n}{a_n} \frac{a_n}{b_n} \to W \cdot 0 = 0$, taking $Z_n = \frac{Y_n}{a_n}$, $Z=W$, $c_n = \frac{a_n}{b_n}$, and $c=0$ in Slutsky. So $Y_n$ does not converge of order $b_n$.
On the other hand, if $\frac{b_n}{a_n} \to 0$, suppose to the contrary that $\frac{Y_n}{b_n}$ converges in distribution to some $Z$. Then $\frac{Y_n}{a_n} = \frac{Y_n}{b_n} \frac{b_n}{a_n} \to 0 \cdot Z = 0$ by Slutsky. But $\frac{Y_n}{a_n}$ was assumed to converge in distribution to $W$, which is not identically zero. This is a contradiction, so $Y_n$ does not converge of order $b_n$.
But there isn't generally a unique sequence here. If $Y_n$ converges of order $\frac{1}{n}$, it would also be true to say $Y_n$ converges of order $\frac{1}{n+43}$, or $\frac{1}{n+\log n}$, or $\frac{1}{2n}$, et cetera.
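A quick simulation makes the trichotomy visible (a sketch; the example $Y_n=\bar X_n-\mu$ for i.i.d. standard normals, with $a_n=n^{-1/2}$, is my own illustration). Since $\bar X_n$ is exactly $N(0,1/n)$ here, we can draw $Y_n$ directly:

```python
import numpy as np

rng = np.random.default_rng(2)
reps = 100_000

# Y_n = (mean of n i.i.d. N(0,1)) - 0 is exactly N(0, 1/n), so we sample
# it directly instead of averaging n draws.
for n in (100, 10_000, 1_000_000):
    Y = rng.standard_normal(reps) / np.sqrt(n)
    print(f"n={n:>9}"
          f"  sd(Y_n*sqrt(n)) = {np.std(Y * np.sqrt(n)):.3f}"  # a_n = n^{-1/2}: stable
          f"  sd(Y_n*n) = {np.std(Y * n):9.2f}"                # b_n = 1/n: blows up
          f"  sd(Y_n) = {np.std(Y):.5f}")                      # b_n = 1: collapses to 0
```

The first column stays near $1$ (the nondegenerate limit $W$), the second grows like $\sqrt{n}$ (dividing by the faster-vanishing $b_n=1/n$), and the third shrinks to $0$ (dividing by $b_n=1$), exactly as the Slutsky argument predicts.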
Not sure what you mean here, as this is just a restatement of the CLT, whose proof you seem to know.