Before answering the question, I want to stress that I consider this one of the (much too few) homework questions asked on this site whose author respects the recommendations for asking homework questions: mainly, to show what one has tried and where one is stuck.
In the present case, why is the OP stuck? My guess is that this is due to the conjunction of two facts:
(1) The OP made a mistake in the computations presented.
(2) The OP might not be aware of some automatic ways to deduce the distribution of a random variable from its Laplace transform when the Laplace transform is a rational function.
We start with the Pollaczek–Khinchine formula $\tilde W(s)=\dfrac{(1-\rho)s}{s-\lambda(1-\tilde B(s))}$ for $\tilde W(s)=E(\mathrm{e}^{-sW})$. Plugging the values of $\tilde B(s)$, $\lambda$ and $\rho$ the OP mentions into this formula yields a different expression from the one the OP computed, namely,
$$
\tilde{W}(s) = \frac{\frac12s}{\frac12(\frac{1}{2+s} + \frac{1}{2+3s}) + s - \frac{1}{2}}.
$$
We note that this formula for $\tilde{W}(s)$ passes a test that the one proposed by the OP fails, namely, that $\tilde{W}(s)$ does not degenerate when $s\to0$.
Unsurprisingly, $\tilde{W}(s)$ simplifies: a factor $s$ appears in the denominator, which cancels the factor $s$ in the numerator. When the dust has settled, we get the key formula
$$
\tilde{W}(s) = \frac{4+8s+3s^2}{4+13s+6s^2}.
$$
We note that $\tilde{W}(0)=1$, as it should be.
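For readers who want to let a machine check the algebra, here is a quick sanity check in Python (a sketch using exact rational arithmetic; the function names are mine): it verifies that the Pollaczek–Khinchine expression with the OP's data agrees with the key formula at several sample points.

```python
from fractions import Fraction


def pk(s):
    # Pollaczek-Khinchine expression with lambda = rho = 1/2 and
    # B~(s) = 1/(2+s) + 1/(2+3s); s must be a nonzero Fraction
    B = 1 / (2 + s) + 1 / (2 + 3 * s)
    return (Fraction(1, 2) * s) / (Fraction(1, 2) * B + s - Fraction(1, 2))


def key(s):
    # the simplified rational form (the "key formula")
    return (4 + 8 * s + 3 * s * s) / (4 + 13 * s + 6 * s * s)


for s in (Fraction(1, 3), Fraction(1), Fraction(7, 2)):
    assert pk(s) == key(s)  # exact equality, no rounding involved
print(key(Fraction(0)))  # -> 1, i.e. W~(0) = 1
```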
First use of the key formula: to compute the value of $P(W=0)$
When $s\to+\infty$, $\mathrm{e}^{-sW}\to\mathbb{1}_{W=0}$ hence $\tilde{W}(s)\to P(W=0)$. Keeping only the $s^2$ terms of the numerator and the denominator yields
the value the OP computed, that is,
$$
P(W=0)=\frac{3}{6}=\frac{1}{2}.
$$
Second use of the key formula: to compute the value of $E(W)$
Classically $E(W)=-\tilde{W}'(0)$. Writing $\tilde{W}(s)$ as $\tilde{W}(s) = N(s)/D(s)$, we get the value the OP computed, that is,
$$
E(W)=-\frac{N'(0)}{D(0)}+D'(0)\frac{N(0)}{D(0)^2}=-\frac{8}{4}+13\frac{4}{4^2}=\frac{5}{4}.
$$
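The two values just obtained can be checked numerically as well; this short sketch (exact arithmetic again, notation mine) recomputes $E(W)$ from the derivative formula and approximates the $s\to+\infty$ limit for $P(W=0)$.

```python
from fractions import Fraction

# coefficients of N(s) = 4 + 8s + 3s^2 and D(s) = 4 + 13s + 6s^2
N = [Fraction(4), Fraction(8), Fraction(3)]
D = [Fraction(4), Fraction(13), Fraction(6)]

# E(W) = -W~'(0) = -N'(0)/D(0) + D'(0) N(0)/D(0)^2
EW = -N[1] / D[0] + D[1] * N[0] / D[0] ** 2
assert EW == Fraction(5, 4)

# P(W = 0) = lim_{s -> oo} W~(s): evaluate the ratio at a large s
s = Fraction(10 ** 9)
W = (N[0] + N[1] * s + N[2] * s ** 2) / (D[0] + D[1] * s + D[2] * s ** 2)
assert abs(W - Fraction(1, 2)) < Fraction(1, 10 ** 8)
```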
Third use (more involved) of the key formula: to compute the value of $P(W>1)$
As noted by the OP, there seems to be no other way than
to compute the whole distribution of $W$. We already know this distribution has an atom $\frac12$ at $w=0$ and we can guess it has an absolutely continuous part described by a density $\frac12f(w)$ on $w>0$. Thus, we want $f$ such that for every $s$,
$$
Q(s)=\int_0^{+\infty}\mathrm{e}^{-sw}f(w)\mathrm{d}w,\quad\mbox{where}\ Q(s)=2\tilde{W}(s)-2P(W=0).
$$
Now, $Q(s)$ is a rational function of $s$. If this rational function has simple poles, sooner or later we will want to invert expressions like
$$
\frac1{s+c}=\int_0^{+\infty}\mathrm{e}^{-sw}f_c(w)\mathrm{d}w.
$$
We should keep in mind the solution $f_c$ of this elementary case, which is simply
$$
f_c(w)=\mathrm{e}^{-cw}.
$$
From this point, our task is clear: to decompose $Q(s)$ into a linear combination of simple rational functions $1/(s+c)$ and to identify $f$ as the corresponding linear combination of functions $f_c$. If I am not mistaken,
$$
Q(s)= \frac{4+3s}{4+13s+6s^2}=\frac{a}{s+c}+\frac{a'}{s+c'},
$$
where $(a,a',c,c')$ solves
$$
a+a'=\frac12,\ ac'+a'c=\frac23,\ c+c'=\frac{13}6,\ cc'=\frac23.
$$
Hence
$$
f(w)=a\mathrm{e}^{-cw}+a'\mathrm{e}^{-c'w}.
$$
A last test to see if our computations went astray: since $f$ is the density of a measure of mass $1$, we should have $\displaystyle\frac{a}{c}+\frac{a'}{c'}=1$.
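Here is a machine check of this whole step (a float-based sketch; the variables `a2`, `c2` are my names for $a'$, $c'$): it solves the system, then verifies the partial-fraction identity and the mass test.

```python
import math

# solve 6 s^2 + 13 s + 4 = 6 (s + c)(s + c') for c, c'
disc = 13 ** 2 - 4 * 6 * 4          # discriminant, = 73
c = (13 + math.sqrt(disc)) / 12
c2 = (13 - math.sqrt(disc)) / 12    # c2 plays the role of c'

# solve the linear system a + a' = 1/2, a c' + a' c = 2/3 for a, a'
a2 = (2 / 3 - c2 / 2) / (c - c2)    # a2 plays the role of a'
a = 1 / 2 - a2

# consistency: the remaining two equations of the system
assert abs(c + c2 - 13 / 6) < 1e-9
assert abs(c * c2 - 2 / 3) < 1e-9

# partial-fraction identity Q(s) = a/(s+c) + a'/(s+c') at a sample point
s = 0.7
Q = (4 + 3 * s) / (4 + 13 * s + 6 * s * s)
assert abs(Q - (a / (s + c) + a2 / (s + c2))) < 1e-9

# the mass test: f is a probability density, so a/c + a'/c' = 1
assert abs(a / c + a2 / c2 - 1) < 1e-9
```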
Finally, all this yields
$$
P(W>1)=\frac12\left(a\frac{\mathrm{e}^{-c}}c+a'\frac{\mathrm{e}^{-c'}}{c'}\right),
$$
and I think the numerical values needed to compute this are
$$
a=\frac{u-3}{4u},\ a'=\frac{u+3}{4u},\ c=\frac{13+u}{12},\ c'=\frac{13-u}{12},\
u=\sqrt{73},
$$
hence,
$$
P(W>1)=\frac14\mathrm{e}^{-13/12}\left(\left(1-7/u\right)\mathrm{e}^{-u/12}+\left(1+7/u\right)\mathrm{e}^{u/12}\right).
$$
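Finally, one can confirm numerically that the closed form above agrees with the $(a,a',c,c')$ expression for $P(W>1)$; the sketch below (again with `a2`, `c2` standing for $a'$, $c'$) does so and prints the approximate value.

```python
import math

u = math.sqrt(73.0)
a, a2 = (u - 3) / (4 * u), (u + 3) / (4 * u)
c, c2 = (13 + u) / 12, (13 - u) / 12

# P(W > 1) in the (a, c) form and in the final closed form
p1 = 0.5 * (a * math.exp(-c) / c + a2 * math.exp(-c2) / c2)
p2 = 0.25 * math.exp(-13 / 12) * ((1 - 7 / u) * math.exp(-u / 12)
                                  + (1 + 7 / u) * math.exp(u / 12))
assert abs(p1 - p2) < 1e-12
print(round(p1, 3))  # P(W > 1) is approximately 0.321
```

As a sanity check, the value lies in $(0,\tfrac12)$, as it must, since $P(W>1)\le P(W>0)=\tfrac12$.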
The step where you assert that
$$
\mathrm P(S_n<m\mid M_{n-1}<m)=\mathrm P(S_n<m\mid S_{n-1}<m)
$$
because $(S_n)_{n\ge0}$ is a Markov chain, is a beautiful example of the subtle errors one can make when using the Markov property.
To see that this equality is false in general, consider an inhomogeneous random walk $(S_n)$, exactly like in your post. Fix a time $n\ge2$, and assume that $X_{n-1}$ takes its values in $\{-2,-1\}$ while $X_k$ takes its values in $\{-1,0,1\}$ for every $0\le k\le n$ such that $k\ne n-1$.
Then, $X_{n-1}\le0$ almost surely hence $[M_{n-1}<1]=[M_{n-2}<1]$. And $X_{n-1}\le-1$ almost surely hence $[M_{n-2}<1]\subseteq[S_{n-2}<1]\subseteq[S_{n-1}<0]$. And $X_n\le1$ almost surely hence, conditionally on $[S_{n-1}<0]$, $S_n<1$ almost surely. This shows that
$$
\mathrm P[S_n<1\mid M_{n-1}<1]=1.
$$
On the other hand, $[S_{n-1}=0]$ has positive probability because $n\ge2$, and $[S_{n}=1]$ contains $[S_{n-1}=0]\cap[X_{n}=1]$, hence
$$
\mathrm P[S_n<1\mid S_{n-1}<1]<1.
$$
Note added: The key properties of the random walk in this counterexample are that $X_{n-1}\le-x$ and $X_n\le x$ almost surely, for a given positive $x$, and that both events $[S_{n-1}=0]$ and $[X_n\ge1]$ have positive probability.
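The counterexample can also be checked by exhaustive enumeration. Here is a sketch of the smallest case $n=2$, with uniform step distributions (an illustrative choice of mine; any distributions satisfying the constraints above would do), computing both conditional probabilities exactly.

```python
from fractions import Fraction
from itertools import product

# n = 2: X0, X2 uniform on {-1, 0, 1}, X1 uniform on {-2, -1}, independent
steps_small = [(-1, Fraction(1, 3)), (0, Fraction(1, 3)), (1, Fraction(1, 3))]
steps_down = [(-2, Fraction(1, 2)), (-1, Fraction(1, 2))]

num_M = den_M = num_S = den_S = Fraction(0)
for (x0, p0), (x1, p1), (x2, p2) in product(steps_small, steps_down, steps_small):
    p = p0 * p1 * p2
    s0, s1, s2 = x0, x0 + x1, x0 + x1 + x2
    m1 = max(s0, s1)                       # M_1 = max(S_0, S_1)
    if m1 < 1:
        den_M += p
        if s2 < 1:
            num_M += p
    if s1 < 1:
        den_S += p
        if s2 < 1:
            num_S += p

assert num_M / den_M == 1                  # P(S_2 < 1 | M_1 < 1) = 1
assert num_S / den_S == Fraction(17, 18)   # P(S_2 < 1 | S_1 < 1) = 17/18 < 1
```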
For fixed $n$, the random variable $S_{n+1}$ has an Erlang density, i.e., \begin{equation} \nonumber f_{S_{n+1}} (t) = \lambda^{n+1} \frac{t^n}{n!} e^{-\lambda t} \end{equation} which implies \begin{equation} \nonumber \Pr(S_{n+1}\le t) = \int_0^t \lambda^{n+1} \frac{z^n}{n!} e^{-\lambda z} \,\mathrm{d}z. \end{equation} Therefore
$$
\begin{aligned}
\Pr(S_N\le t) &= \Pr\left ( \bigcup_{n\ge 0} \left( \{ S_N\le t \} \cap \{ N=n \} \right) \right)\\
&= \sum_{n\ge 0} \Pr\left ( \{ S_{N}\le t \} \cap \{ N=n \} \right) \\
&= \sum_{n\ge 0} \Pr\left ( S_{N}\le t\mid N=n\right) \Pr\left (N=n \right)\\
&= \sum_{n\ge 0} \Pr\left ( S_{n+1}\le t\right) \Pr\left (N=n \right)\\
&= \sum_{n\ge 0} \int_0^t \lambda^{n+1} \frac{z^n}{n!} e^{-\lambda z} \,\mathrm{d}z \,p^n (1-p)\\
&= \lambda (1-p) \int_0^t \sum_{n\ge 0} \frac{(p \lambda z)^n}{n!} e^{-\lambda z} \,\mathrm{d}z \\
&= \lambda (1-p) \int_0^t e^{p \lambda z} e^{-\lambda z} \,\mathrm{d}z \\
&= \lambda (1-p) \int_0^t e^{- \lambda(1-p) z} \,\mathrm{d}z\\
&= 1- e^{- \lambda(1-p) t},
\end{aligned}
$$
from which we recognize that $S_N$ is a random variable with the exponential law of parameter $\lambda(1-p)$. Note that one may interchange the integral and the summation because all the terms involved are nonnegative.
The same conclusion can be obtained with a characteristic function argument.
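The conclusion is also easy to test by simulation; the sketch below (the parameter values $\lambda=2$, $p=0.4$ and the sample size are arbitrary choices of mine) compares the empirical CDF of $S_N$ at one point with $1-\mathrm e^{-\lambda(1-p)t}$.

```python
import math
import random

random.seed(0)
lam, p = 2.0, 0.4
rate = lam * (1 - p)  # claimed parameter of the exponential law of S_N


def sample_SN():
    # N geometric with P(N = n) = p^n (1 - p) for n >= 0;
    # S_N is then the sum of N + 1 i.i.d. Exp(lam) variables
    n = 0
    while random.random() < p:
        n += 1
    return sum(random.expovariate(lam) for _ in range(n + 1))


samples = [sample_SN() for _ in range(200_000)]
t = 1.0
empirical = sum(x <= t for x in samples) / len(samples)
theoretical = 1 - math.exp(-rate * t)
assert abs(empirical - theoretical) < 0.01  # Monte Carlo tolerance
```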