$\newcommand{\R}{\mathbb R}$
$\newcommand{\P}{\mathbf P}$
$\newcommand{\Z}{\mathbb Z}$
I found this question very interesting and gave it much thought this week. I believe I have a proof now. I think it would be interesting to see what generalizations one can get from this argument. Below I use probabilistic notation, which I'm more used to.
Let $\{X_n\}_{n\in \Z}$ be a stationary stochastic process, taking values in $\R$. Let $L=\limsup_{n\to -\infty} \frac{X_n}{|n|}$ and $R=\limsup_{n\to \infty} \frac{X_n}{|n|}$.
Theorem: $L=R$, almost surely.
Proof:
It is enough to prove the theorem for ergodic processes (by ergodic decomposition). We may also assume, WLOG, that all the values are nonnegative integers.
Let $A$ be the event that for some $n<0$ we have $X_n\ge |n|$ and let $B$ be the event that for some $n>0$ we have $X_n\ge |n|$.
Lemma: $\P(B) \le 2 \P(A) .$
Proof of lemma:
For a given realization of $X$, let $I$ be the set of indices $i$ for which $T^i X$ is in $A$ and let $J$ be the set of indices $i$ for which $T^i X$ is in $B$, where $T$ denotes the shift. By the ergodic theorem, the density of $I$ is $\P(A)$ and the density of $J$ is $\P(B)$, so it suffices to show that the density of $J$ is no more than twice the density of $I$.
Up to endpoint conventions, $I$ can be written as $\cup_{n} \{n,\ldots,n+X_n\}$, while $J=\cup_{n} \{n-X_n,\ldots,n\}$, where the unions run over those $n\in\Z$ with $X_n\ge 1$ (indices with $X_n=0$ contribute to neither event). In particular, $J$ is contained in $\bar{J}=\cup_{n:\,X_n\ge 1} \{n-X_n,\ldots,n+X_n\}$. But if we write $I$ as a union of maximal disjoint intervals, then in $\bar{J}$ each of these intervals is extended to the left by at most its own length. Hence, the density of $\bar{J}\supseteq J$ is at most twice that of $I$. $\blacksquare$
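For intuition, here is a quick numerical sanity check of the covering claim (a sketch under simplifying assumptions: a finite window instead of the two-sided stationary setting, densities measured away from the boundary, and an arbitrary capped heavy-tailed choice for the $X_n$):

```python
import random

# Sanity check of the covering claim: the density of Jbar (hence of J)
# is at most twice the density of I. Finite-window approximation; the
# distribution of the X_n is an arbitrary illustrative choice.
random.seed(1)
N = 200_000
# Nonnegative integers with a heavy tail, so that long "sticks" occur;
# lengths are capped to keep memory bounded.
X = [min(int(random.paretovariate(1.5)) - 1, N // 100) if random.random() < 0.2 else 0
     for _ in range(N)]

I, Jbar = set(), set()
for n, x in enumerate(X):
    if x >= 1:                                # X_n = 0 contributes nothing
        I.update(range(n, n + x + 1))         # sticks {n, ..., n + X_n}
        Jbar.update(range(n - x, n + x + 1))  # sticks {n - X_n, ..., n + X_n}

window = range(N // 4, 3 * N // 4)            # central window: avoid edge effects
dens_I = sum(i in I for i in window) / len(window)
dens_Jbar = sum(i in Jbar for i in window) / len(window)
print(f"density(I) = {dens_I:.4f}, density(Jbar) = {dens_Jbar:.4f}")
print(f"ratio = {dens_Jbar / dens_I:.3f}  (the lemma predicts <= 2)")
```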
Of course, by symmetry, we also have $\P(A) \le 2 \P(B)$. Let $A_K$ be the event that for some $n<0$ we have $X_n \ge \max(|n|,K)$, and define $B_K$ analogously. By applying the lemma to the process $\{Y_n\}_{n\in \Z}$ defined by $Y_n=X_n$ if $X_n\ge K$ and $Y_n=0$ otherwise, we get $\P(B_K) \le 2 \P(A_K)$, and vice versa.
In particular, $\lim_{K\to \infty} \P(A_K) =0$ if and only if $\lim_{K\to \infty} \P(B_K) = 0$. These limits necessarily exist, since the events $A_K$ and $B_K$ are decreasing in $K$.
For ergodic processes, $L$ and $R$ are a.s. constant. Now, if $L<1$ a.s., then a.s. only finitely many $n<0$ satisfy $X_n\ge |n|$, so $\P(A_K)\to 0$. In the other direction, if $\P(A_K)\to 0$, then a.s. $X_n\ge |n|$ for only finitely many $n<0$, so $L\le 1$. The same implications hold for $R$ and $B_K$. Hence, if $L<1$ then $R\le 1$, and vice versa. By applying this to the rescaled process $\{X_n/\alpha\}$, we get that for every $\alpha>0$, $L<\alpha$ implies $R\le \alpha$ (and vice versa), so we must have $L=R$. $\blacksquare$
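To illustrate the truncation step numerically, one can estimate $\P(A_K)$ and $\P(B_K)$ by the ergodic densities of the corresponding index sets along one long realization (again only a sketch: the i.i.d. burst process below is my arbitrary choice, and for it $A_K$ and $B_K$ happen to be symmetric in law, so this only checks the size and decay of the estimates):

```python
import random

# Estimate P(A_K) and P(B_K) as densities of the index sets
#   I_K = union over {n : X_n >= K} of {n+1, ..., n + X_n}   (shifts landing in A_K)
#   J_K = union over {n : X_n >= K} of {n - X_n, ..., n - 1} (shifts landing in B_K)
# along one long realization. The process (i.i.d. heavy-tailed bursts,
# capped for memory) is an arbitrary illustrative choice.
random.seed(2)
N = 400_000
X = [min(int(random.paretovariate(1.2)), 5_000) if random.random() < 0.01 else 0
     for _ in range(N)]

def density(K: int, right: bool) -> float:
    covered = set()
    for n, x in enumerate(X):
        if x >= K:
            covered.update(range(n + 1, n + x + 1) if right else range(n - x, n))
    window = range(N // 4, 3 * N // 4)       # central window: avoid edge effects
    return sum(i in covered for i in window) / len(window)

for K in (1, 10, 100, 1000):
    print(f"K = {K:4d}:  P(A_K) ~ {density(K, True):.5f},  P(B_K) ~ {density(K, False):.5f}")
# Both columns decrease to 0 and remain within a constant factor of each
# other, matching P(B_K) <= 2 P(A_K) <= 4 P(B_K).
```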
An early occurrence of such bounds is the theorem of von Bahr and Esseen:

von Bahr, B., Esseen, C.-G.: Inequalities for the $r$th absolute moment of a sum of random variables, $1\leq r \leq 2$. Ann. Math. Statist. 36, No. 1, 299–303 (1965).
Theorem: Let $X_i$ be independent (not necessarily i.i.d.) zero-mean random variables. Then, for $\alpha\in [1,2]$,
$$E\Bigl|\sum_{i=1}^n X_i\Bigr|^\alpha\leq C_\alpha \sum_{i=1}^n E|X_i|^\alpha\,.$$
(The constant $C_\alpha$ is explicit; in fact $C_\alpha=2$ works.)
Applying Markov's inequality in your setup gives
$$P(|S_n|>t)\leq C_\alpha\, n\, E|X_1|^\alpha\, t^{-\alpha},$$
as you wanted.
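As a quick sanity check, here is a Monte-Carlo comparison of the empirical tail of $S_n$ with this bound (a sketch only: it assumes the i.i.d. case, takes $C_\alpha=2$, and the symmetric Pareto-type distribution and all parameters are my arbitrary choices):

```python
import random

# Empirical check of P(|S_n| > t) <= C_alpha * n * E|X_1|^alpha / t^alpha
# for i.i.d. symmetric heavy-tailed variables with E|X_1|^alpha < infinity.
# C_alpha = 2 is used as the admissible von Bahr-Esseen constant.
random.seed(3)
alpha = 1.5
n, trials = 1000, 5000

def sample_x() -> float:
    # Symmetric variable with P(|X| > u) = u^(-1.8) for u >= 1, so that
    # E X = 0 and E|X|^alpha < infinity since alpha = 1.5 < 1.8.
    u = random.paretovariate(1.8)
    return u if random.random() < 0.5 else -u

moment = 1.8 / (1.8 - alpha)   # E|X_1|^alpha for this Pareto(1.8)-type tail

sums = [abs(sum(sample_x() for _ in range(n))) for _ in range(trials)]
for t in (500.0, 1000.0, 2000.0):
    empirical = sum(s > t for s in sums) / trials
    bound = 2 * n * moment / t ** alpha
    print(f"t = {t:6.0f}: empirical {empirical:.4f} <= bound {min(bound, 1.0):.4f}")
```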
Note: the moment condition certainly does not imply convergence to an $\alpha$-stable law; one would need some regular-variation condition on the tails for that.
Note 2: a good summary (up to the late 1970s) of estimates of this kind (mostly from the Russian school) is in Nagaev's Annals of Probability paper (1979). Petrov's 1975 book is also a good resource; in particular, the von Bahr–Esseen bound is mentioned there (on page 60).
Best Answer
The answer is yes, and it is based on an idea of Prokhorov (cf. e.g. Theorem 10 in Section 3 of Ch. IX in [Petrov, V. V., Sums of independent random variables, Springer-Verlag, 1975]). We have $EX_n^2\le Cn$ for some real $C>0$ and all natural $n$. For natural $s$, let
\begin{equation} T_s:=\max_{2^s\le n\le 2^{s+1}}\frac{|X_n-X_{2^s}|}{2^s}. \end{equation}
Then, by Doob's martingale inequality, for any real $t>0$
\begin{equation} P(T_s\ge t)\le\frac{E(X_{2^{s+1}}-X_{2^s})^2}{(2^s t)^2} \le\frac{EX_{2^{s+1}}^2}{(2^s t)^2}\le\frac{C2^{s+1}}{(2^s t)^2}=\frac{2C}{2^s t^2}, \end{equation}
where the second inequality uses the orthogonality of martingale increments: $E(X_{2^{s+1}}-X_{2^s})^2=EX_{2^{s+1}}^2-EX_{2^s}^2$. Hence $\sum_{s=1}^\infty P(T_s\ge t)<\infty$, and so, by the Borel--Cantelli lemma, $T_s\to0$ almost surely (a.s.) as $s\to\infty$.

Therefore, for any natural $n$ and $r$ such that $2^r\le n<2^{r+1}$ (so that $r=r_n:=\lfloor\log_2 n\rfloor$), telescoping over the dyadic blocks gives $|X_n-X_1|\le\sum_{s=0}^{r-1}|X_{2^{s+1}}-X_{2^s}|+|X_n-X_{2^r}|\le\sum_{s=0}^r 2^s T_s$, whence
\begin{equation} \frac{|X_n-X_1|}n\le2\,\frac{|X_n-X_1|}{2^{r+1}} \le\frac2{2^{r+1}}\,\sum_{s=0}^r 2^s T_s\to0 \tag{1} \end{equation}
a.s., since $\sum_{s=0}^r 2^s<2^{r+1}$ and $T_s\to0$ (cf. e.g. Lemma 9 in Section 3 of Ch. IX of Petrov's book). Thus, $\frac{X_n}n\to0$ a.s., as desired.
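To see the dyadic-block mechanism in action, here is a small simulation (a sketch: it takes $X_n$ to be a simple random walk, i.e., one particular martingale with $EX_n^2=n$, which is my choice of example, not something from the question):

```python
import random

# Dyadic blocks for a simple random walk X_n (a martingale with E X_n^2 = n):
# compute T_s = max_{2^s <= n <= 2^(s+1)} |X_n - X_{2^s}| / 2^s and the
# resulting bound (2 / 2^(r+1)) * sum_{s<=r} 2^s T_s on |X_n - X_1| / n.
random.seed(4)
R = 20
N = 1 << R                                    # path length 2^R
X = [0] * (N + 1)
for n in range(1, N + 1):
    X[n] = X[n - 1] + random.choice((-1, 1))

T = []
for s in range(R):
    lo, hi = 1 << s, 1 << (s + 1)
    T.append(max(abs(X[n] - X[lo]) for n in range(lo, hi + 1)) / lo)

for s in (0, 4, 9, 14, 19):
    print(f"T_{s:2d} = {T[s]:.4f}")           # T_s ~ 2^(-s/2): tends to 0

r = R - 1
bound = 2 * sum((1 << s) * T[s] for s in range(r + 1)) / (1 << (r + 1))
print(f"bound (1) on |X_n - X_1|/n over the last block: {bound:.5f}")
print(f"actual |X_N|/N = {abs(X[N]) / N:.5f}")
```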
Details on $(1)$: Since $T_s\to0$ a.s., for each real $\epsilon>0$ there is an a.s. finite natural-valued random variable $R_\epsilon$ such that $T_s\le\epsilon$ for all natural $s>R_\epsilon$. Therefore,
\begin{equation} \frac1{2^{r+1}}\,\sum_{s=0}^r 2^s T_s \le\frac1{2^{r+1}}\,\sum_{s=0}^{R_\epsilon} 2^s T_s+\frac1{2^{r+1}}\,\sum_{s=R_\epsilon+1}^r 2^s \epsilon \le\frac1{2^{r+1}}\,\sum_{s=0}^{R_\epsilon} 2^s T_s+\epsilon. \end{equation} So, $\limsup_{r\to\infty}\frac1{2^{r+1}}\,\sum_{s=0}^r 2^s T_s\le\epsilon$, for any $\epsilon>0$, and hence $\lim_{r\to\infty}\frac1{2^{r+1}}\,\sum_{s=0}^r 2^s T_s=0$.
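And a one-line numerical check of this weighted-average fact, with the arbitrary test sequence $T_s=1/(s+1)\to0$:

```python
# Check: if T_s -> 0, then (1/2^(r+1)) * sum_{s=0}^r 2^s T_s -> 0.
# The test sequence T_s = 1/(s+1) is an arbitrary illustrative choice.
T = lambda s: 1.0 / (s + 1)
for r in (10, 20, 40, 80):
    avg = sum(2 ** s * T(s) for s in range(r + 1)) / 2 ** (r + 1)
    print(f"r = {r:3d}: weighted average = {avg:.3e}")   # decreases to 0
```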