I figured the remarks I made in the comments deserved to be gathered into a more coherent form as an answer.
One thing I will start with is that comparing $L(n)$ (or $M(n)$, the Mertens function) to the values $S_n$ is arguably not the right heuristic. Indeed, $S_n$ averages over all random walks, which can have their peaks and troughs at different places, and so they cancel out. Analysing a single random walk gives a different asymptotic: this is governed by the law of the iterated logarithm, which asserts that the random walk will oscillate with extremal values on the order of $\pm\sqrt{2n\log\log n}$.
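As a sanity check on the scale $\sqrt{2n\log\log n}$, here is a quick pure-Python simulation (illustrative only; the walk length, the number of walks, and the seed are arbitrary choices). It tracks the running maximum of $|S_k|$ for a single $\pm1$ walk against the LIL scale, and separately averages the endpoints of many independent walks to exhibit the cancellation mentioned above:

```python
import math
import random

random.seed(1)

# One +-1 random walk of length n: record the running maximum of |S_k|.
n = 200_000
lil_scale = math.sqrt(2 * n * math.log(math.log(n)))  # LIL scale sqrt(2 n log log n)

s = 0
peak = 0
for _ in range(n):
    s += random.choice((-1, 1))
    peak = max(peak, abs(s))
# peak / lil_scale is O(1): a single walk's extremes live on the LIL scale.

# Averaging the endpoints of many independent walks instead: peaks and troughs
# sit at different places, so the average endpoint is near 0.
m, length = 2000, 1000
avg_end = sum(
    sum(random.choice((-1, 1)) for _ in range(length)) for _ in range(m)
) / m
```

The single-walk maximum is of the same order as `lil_scale`, while `avg_end` is close to zero, which is exactly the cancellation that makes averaged quantities a misleading heuristic here.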
However, it turns out that for subtle arithmetic reasons we do not expect the arithmetic functions of interest to reach the same bounds. Most of the literature (see e.g. this paper) concerns the Mertens function $M(n)$, so I will discuss that one, but all the relevant heuristics should also hold for $L(n)$. An unpublished conjecture of Gonek is that the maximal order of $|M(n)|$ is not $\sqrt{n\log\log n}$, but rather $\sqrt{n}(\log\log\log n)^{5/4}$, which in particular implies that the ratio $|M(n)|/\sqrt{n}$ is unbounded.
While Gonek's conjecture is still open, as is the unboundedness of $|M(n)|/\sqrt{n}$, we do have unconditional results which imply that this sequence does not have a limit. Specifically, we know that $M(n)/\sqrt{n}$ has strictly negative liminf and strictly positive limsup (indeed, the limsup is greater than $1.8$ and the liminf is less than $-1.8$; see Wikipedia for more details and references). Since $M(n)$ changes by at most $1$ at each step, the sign changes of $M(n)/\sqrt{n}$ force $M(n)$ to vanish infinitely often, so $|M(n)|/\sqrt{n}$ is zero infinitely often while also taking values greater than $1$ infinitely often; hence it cannot have a limit. (The same is known for the Liouville function, though with different bounds on the liminf and limsup; see Wikipedia.)
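The sign changes of $M(n)$ are easy to observe directly. Here is a small illustrative Python snippet (the cutoff $N=1000$ is an arbitrary choice) that sieves the Möbius function, forms the partial sums $M(n)=\sum_{k\le n}\mu(k)$, and collects the points where $M(n)=0$:

```python
# Sieve the Moebius function mu(k) for k <= N, then form the Mertens
# partial sums M(n) = sum_{k <= n} mu(k).
N = 1000
mu = [1] * (N + 1)
is_prime = [True] * (N + 1)
for p in range(2, N + 1):
    if is_prime[p]:
        for k in range(p, N + 1, p):
            if k > p:
                is_prime[k] = False
            mu[k] *= -1          # one sign flip per prime factor p
        for k in range(p * p, N + 1, p * p):
            mu[k] = 0            # squarefull => mu = 0

M = [0] * (N + 1)
for n in range(1, N + 1):
    M[n] = M[n - 1] + mu[n]

# Points where M(n) = 0, i.e. where |M(n)|/sqrt(n) touches zero.
zeros = [n for n in range(1, N + 1) if M[n] == 0]
```

Known small values such as $M(2)=0$, $M(10)=-1$ and $M(100)=1$ come out of this sieve, and the list `zeros` shows $M(n)$ returning to zero repeatedly, consistent with the sign changes discussed above.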
$\newcommand\1{\mathbf1}\newcommand\R{\mathbb R}\newcommand\Si{\Sigma}\newcommand{\si}{\sigma}\newcommand{\Ga}{\Gamma}$Letting
$$y_i:=g(z_i)z_i=z_iz_i^\top\1,$$
where $\1:=[1,\dots,1]^\top\in\R^d$, we get
$$Ey_i=\Si\1.$$
So, by the strong law of large numbers
$$x_s=\frac1s\,\sum_1^s y_i\to\Si\1$$
almost surely as $s\to\infty$, so that
\begin{equation*}
\|x_s\|^2\to \si_d^2:=\1^\top\Si^2\1\tag{1}\label{1}
\end{equation*}
almost surely as $s\to\infty$.
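The almost-sure limit in (1) is easy to check numerically. The following sketch (NumPy; the $3\times3$ matrix $\Si$, the sample size, and the seed are arbitrary illustrative choices) simulates $z_i\sim N(0,\Si)$, forms $x_s$, and compares $\|x_s\|^2$ with $\si_d^2=\1^\top\Si^2\1$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: d = 3 with an arbitrary fixed covariance matrix Sigma.
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
one = np.ones(3)
sigma_d2 = float(one @ Sigma @ Sigma @ one)   # sigma_d^2 = 1^T Sigma^2 1

s = 200_000
z = rng.multivariate_normal(np.zeros(3), Sigma, size=s)  # rows: z_i ~ N(0, Sigma)
y = (z @ one)[:, None] * z      # y_i = (z_i^T 1) z_i = z_i z_i^T 1
x_s = y.mean(axis=0)            # x_s = (1/s) sum_i y_i  ->  Sigma 1  a.s.
norm2 = float(x_s @ x_s)        # should be close to sigma_d2
```

For this $\Si$ one has $\si_d^2=\|\Si\1\|^2=12.73$, and `norm2` lands close to that value, in line with (1).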
Also, by Jensen's inequality for the convex function $(0,\infty)\ni x\mapsto1/x$,
\begin{equation*}
E\frac1{\|x_s\|^2}\ge\frac1{E\|x_s\|^2}.
\tag{2}\label{2}
\end{equation*}
Moreover, we can show that $\dfrac1{\|x_s\|^2}$ is uniformly integrable (see Lemma 1 below). So, one immediately gets from \eqref{1} that
\begin{equation*}
E\frac1{\|x_s\|^2}\to\frac1{\si_d^2}. \tag{3}\label{3}
\end{equation*}
So, the relevant quantity here is not $\operatorname{Tr}(\Si^2)$ but $\si_d^2=\1^\top\Si^2\1$, the sum of all the entries of the matrix $\Si^2$.
Lemma 1: $\dfrac1{\|x_s\|^2}$ is uniformly integrable.
Proof: By the Cauchy--Schwarz inequality,
\begin{equation*}
\|x_s\|\ge\frac1{\sqrt d}\,x_s^\top\1
=\frac1{\sqrt d}\frac1s\sum_{i=1}^s y_i^\top\1
=\frac1{s\sqrt d}\sum_{i=1}^s (z_i^\top\1)^2=:R.
\end{equation*}
Next, $z_i^\top\1\sim N(0,\tau_d^2)$, where $\tau_d^2:=\1^\top\Si\1$ is the sum of all the entries of $\Si$ (note that the variance here involves $\Si$, not $\Si^2$). Hence $(z_i^\top\1)^2\sim\mathrm{Gamma}(1/2,2\tau_d^2)$ and $R\sim\mathrm{Gamma}(s/2,2\tau_d^2/(s\sqrt d))$, in the shape-scale parametrization. So, for $s>6$,
\begin{equation*}
\begin{aligned}
E\frac1{\|x_s\|^3}\le E\frac1{R^3}
&=\Big(\frac{s\sqrt d}{2\tau_d^2}\Big)^3
\frac1{\Ga(s/2)}\int_0^\infty u^{s/2-1-3} e^{-u}\,du \\
&=\Big(\frac{s\sqrt d}{2\tau_d^2}\Big)^3 \frac{\Ga(s/2-3)}{\Ga(s/2)}
\underset{s\to\infty}\sim \Big(\frac{s\sqrt d}{2\tau_d^2}\Big)^3 \frac1{(s/2)^3}
=\Big(\frac{\sqrt d}{\tau_d^2}\Big)^3<\infty.
\end{aligned}
\end{equation*}
So, by the de la Vallée-Poussin theorem, $\dfrac1{\|x_s\|^2}$ is uniformly integrable. $\quad\Box$
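The proof of Lemma 1 rests on the inverse-moment formula $EX^{-3}=\theta^{-3}\,\Gamma(a-3)/\Gamma(a)$ for $X\sim\mathrm{Gamma}(a,\theta)$ with shape $a>3$, together with the asymptotics $\Gamma(a-3)/\Gamma(a)=1/((a-1)(a-2)(a-3))$. A quick Monte Carlo sanity check of both (pure Python; the parameters $a=8$, $\theta=2$ and the seed are arbitrary choices):

```python
import math
import random

random.seed(2)

a, theta = 8.0, 2.0   # shape a, scale theta; a > 3 makes E[X^{-3}] finite

# Exact inverse third moment of X ~ Gamma(a, theta):
#   E[X^{-3}] = theta^{-3} Gamma(a-3)/Gamma(a) = theta^{-3}/((a-1)(a-2)(a-3))
exact = theta ** -3 * math.gamma(a - 3) / math.gamma(a)

# Monte Carlo estimate of E[X^{-3}] (gammavariate takes shape, scale).
n = 400_000
mc = sum(random.gammavariate(a, theta) ** -3 for _ in range(n)) / n
```

The estimate `mc` agrees with `exact` to within Monte Carlo error, and the ratio-of-Gammas identity is what drives the $\sim(s/2)^{-3}$ asymptotics in the display above.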
Following the lines of the proofs of the de la Vallée-Poussin theorem and Lemma 1, one can obtain an upper bound on the difference $E\frac1{\|x_s\|^2}-\frac1{\si_d^2}$.
Note also that
\begin{equation*}
E\|x_s\|^2=\frac1{s^2}\sum_{i,j=1}^s Ey_i^\top y_j
=\frac1{s^2}\sum_{i,j=1}^s E\,z_i^\top\1\,z_j^\top\1\,z_i^\top z_j
=\frac1s\,E\,(z_1^\top\1)^2\,z_1^\top z_1
+\frac{s-1}s\,E\,z_1^\top\1\,z_2^\top\1\,z_1^\top z_2
\end{equation*}
and
\begin{equation*}
Ez_1^\top\1 z_2^\top\1 z_1^\top z_2
=\sum_{i,j,k=1}^d Ez_{1i}z_{2j}z_{1k}z_{2k}
=\sum_{i,j,k=1}^d \Si_{ik}\Si_{kj}
=\sum_{i,j=1}^d (\Si^2)_{ij}=\1^\top \Si^2\1=\si_d^2,
\end{equation*}
where $z_{ri}$ is the $i$th coordinate of the random vector $z_r$.
So,
\begin{equation*}
E\|x_s\|^2\sim \si_d^2
\end{equation*}
as $s\to\infty$.
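The cross-term identity $Ez_1^\top\1\,z_2^\top\1\,z_1^\top z_2=\1^\top\Si^2\1$ can also be checked numerically. A short NumPy sketch (the $2\times2$ matrix $\Si$ and the seed are arbitrary illustrative choices) verifies both the deterministic summation identity and the expectation by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 2x2 covariance matrix for a quick check.
Sigma = np.array([[1.5, 0.4],
                  [0.4, 1.0]])
one = np.ones(2)
sigma_d2 = float(one @ Sigma @ Sigma @ one)   # sum of all entries of Sigma^2

# Deterministic identity: sum_{i,j,k} Sigma_ik Sigma_kj = 1^T Sigma^2 1.
lhs = sum(Sigma[i, k] * Sigma[k, j]
          for i in range(2) for j in range(2) for k in range(2))

# Monte Carlo for E[(z1^T 1)(z2^T 1)(z1^T z2)] with independent z1, z2 ~ N(0, Sigma).
n = 300_000
z1 = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
z2 = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
cross = float(np.mean((z1 @ one) * (z2 @ one) * np.sum(z1 * z2, axis=1)))
```

Both `lhs` and `cross` agree with $\si_d^2$, matching the term-by-term computation above.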
Best Answer
It seems no one checked this calculation prior to the publication of this paper. The factor $\frac{\lambda}{1-\lambda}$ in (1) seems mistaken; it should be $\frac{2-\lambda}{1-\lambda}$. This has no effect on the finiteness claimed in the lemma. The denominator in (1) indeed takes the given form because all the other summands in that denominator vanish due to the indicator in the numerator.
To go from (1) to a variant of (2), observe that $$\beta_n(I) \mathbf{1}_{\{ \beta(j)=0 \ \forall j\neq I\}}=\sum_{i=1}^{\nu(e)}\mathbf{1}_{\{I=i\}} \beta_n(i) \mathbf{1}_{\{ \beta(j)=0 \ \forall j\neq I\}} \quad (*)\,.$$ Next we wish to take conditional expectations given $\nu(e)$. For each $i,j \le \nu(e)$ we have $$ \mathbb{E} \biggl[ \frac{\beta_n(i) \mathbf{1}_{\{I=i\}}}{\lambda-1+\beta_n(i)} \,\bigg| \, \nu(e)\biggr] =\frac{\beta_{n-1}(e)\mathbf{1}_S} {\lambda-1+\beta_n(e)} $$ by the branching property, and $P[\beta(j)=0 \mid \nu(e)] =q$ since $\lambda<1$. Thus, using the independence of the subtrees determined by the offspring of $e$, we get $$\mathbb{E} \biggl[ \frac{\beta_n(I) \mathbf{1}_{\{ \beta(j)=0 \ \forall j\neq I\}}}{\lambda-1+\sum_{i=1}^{\nu(e)}\beta_n(i)} \,\bigg| \, \nu(e)\biggr] \le \nu(e) q^{\nu(e)-1} \frac{\beta_{n-1}(e)\mathbf{1}_S} {\lambda-1+\beta_n(e)} \,. $$ Now take another expectation.