You can solve your second equation to give $\pi(1)=\frac{a}{1-p} \pi(0)$, then your third to give $\pi(2)=\frac{ap}{(1-p)^2} \pi(0)$, then your first to give $\pi(x)=\frac{a}{p}\left(\frac{p}{1-p}\right)^x \pi(0)$ for $0 \lt x \lt n$, and finally your fourth to give $\pi(n)=\frac{p}{b} \pi(n-1) = \frac{a}{b}\left(\frac{p}{1-p}\right)^{n-1} \pi(0)$. Your fifth equation is not independent of the others.
You can now add up the terms, noting the geometric progression in the middle, and set the sum equal to $1$ to solve for $\pi(0)$ and thus find all the values of $\pi(x)$.
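The closed form can be sanity-checked numerically. Below is a minimal sketch assuming the birth-death chain implied by the balance equations (up from $0$ with probability $a$, up/down from interior states with probabilities $p$ and $1-p$, down from $n$ with probability $b$); the parameter values are illustrative choices, not from the question.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the original question)
a, p, b, n = 0.3, 0.4, 0.5, 5

# Transition matrix of the assumed birth-death chain on {0, 1, ..., n}
P = np.zeros((n + 1, n + 1))
P[0, 1], P[0, 0] = a, 1 - a                # from 0: up w.p. a, else stay
for x in range(1, n):
    P[x, x + 1], P[x, x - 1] = p, 1 - p    # interior: up w.p. p, down w.p. 1-p
P[n, n - 1], P[n, n] = b, 1 - b            # from n: down w.p. b, else stay

# Closed-form stationary distribution derived above (up to normalization)
pi = np.empty(n + 1)
pi[0] = 1.0
for x in range(1, n):
    pi[x] = (a / p) * (p / (1 - p)) ** x
pi[n] = (a / b) * (p / (1 - p)) ** (n - 1)
pi /= pi.sum()                             # normalize so the probabilities sum to 1

assert np.allclose(pi @ P, pi)             # pi is indeed stationary
```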
You can see right away from their support that these random variables are not independent, as @Dilip Sarwate noted. In general, if the joint support is not a product space, you can immediately conclude that the RVs are not independent. This is not a necessary condition for dependence, though, so be careful.
The conclusion is then a one-liner, but let's derive the marginal densities as well.
\begin{align} f_X (x)=\sum_{y=x}^{\infty} f_{X,Y} (x,y) & =\sum_{y=x}^{\infty} \frac{e^{-2}}{x! \left(y-x\right)!}\\ &= \frac{e^{-2}}{x!} \sum_{y=x}^{\infty} \frac{1}{\left(y-x\right)!} \\&= \frac{e^{-1}}{x!} \quad (\text{why?}) \end{align}
The marginal support of $X$ consists now of all nonnegative integers, i.e. $S_X=\left\{0,1,\ldots\right\}$. For $Y$ we similarly sum over all possible $X$ values.
\begin{align} f_Y (y)=\sum_{x=0}^{y} f_{X,Y} (x,y) &= \sum_{x=0}^{y}\frac{e^{-2}}{x! \left(y-x\right)!} \\ &= \frac{e^{-2}}{y!} \sum_{x=0}^{y} \binom{y}{x} \\ &=\frac{e^{-2}2^y}{y!} \quad (\text{why?}) \end{align}
Likewise, $S_Y=\left\{0,1,\ldots \right\}$. This pmf is very standard; can you recognize its type? From the marginal distributions, it is now obvious that $f_X(x) \times f_Y (y) \neq f_{X,Y} \left(x,y \right)$, as we had initially suspected.
What you should take away from this is that a look at the joint support is a quick way to check for independence, and that when summing/integrating over random variables to arrive at a marginal density, it is very important to have all restrictions in place. As verification of your work, you can always check whether the resulting mass function/density sums/integrates to $1$.
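The derivation above can be checked numerically: sum the joint pmf $f_{X,Y}(x,y)=e^{-2}/\bigl(x!\,(y-x)!\bigr)$ over the correct support (truncated at a large $N$, an implementation choice) and compare with the closed-form marginals.

```python
import math

N = 60  # truncation point; the tail mass beyond this is negligible

def f_joint(x, y):
    # Joint pmf from the question, zero outside the support 0 <= x <= y
    if 0 <= x <= y:
        return math.exp(-2) / (math.factorial(x) * math.factorial(y - x))
    return 0.0

# Marginals: sum over the OTHER variable, respecting the support restrictions
f_X = [sum(f_joint(x, y) for y in range(x, N)) for x in range(N)]
f_Y = [sum(f_joint(x, y) for x in range(y + 1)) for y in range(N)]

# Compare with the closed forms derived above: X ~ Poisson(1), Y ~ Poisson(2)
assert all(abs(f_X[x] - math.exp(-1) / math.factorial(x)) < 1e-12 for x in range(20))
assert all(abs(f_Y[y] - math.exp(-2) * 2**y / math.factorial(y)) < 1e-12 for y in range(20))

# Independence fails: at (x, y) = (1, 0) the joint pmf is 0 (outside the
# support), but the product of marginals is e^{-3} > 0.
assert f_joint(1, 0) == 0.0
assert f_X[1] * f_Y[0] > 1e-3
```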
The "mechanical" result of just plugging $z = 1$ into the transfer response essentially follows from two facts. The steady-state gain is (usually, I believe) defined as the (magnitude of the) limiting response as $t \to \infty$ of the system to a unit-step input.
The so-called final-value theorem states that, if the limit $\lim_{n \to \infty} y(n)$ exists, then $\lim_{n \to \infty} y(n) = \lim_{z \to 1} (1-z^{-1}) Y(z)$ where $y(n)$ is the time-domain output and $Y(z)$ is its corresponding $z$-transform.
Now, for the steady-state gain, the input is a unit-step function, so $x(n) = 1$ for each $n \geq 0$. Hence, $$ X(z) = \sum_{n=0}^\infty x(n) z^{-n} = \sum_{n=0}^\infty z^{-n} = \frac{1}{1-z^{-1}} . $$
Using the transfer equation, we get that the $z$-transform of the output is $$ Y(z) = X(z) H(z) = \frac{H(z)}{1-z^{-1}} . $$
(Assuming that the limit $\lim_{n\to\infty} y(n)$ exists) we have that $$ \lim_{n \to \infty} y(n) = \lim_{z \to 1} (1-z^{-1}) Y(z) = \lim_{z \to 1}\, H(z) . $$
The left-hand side is the steady-state value of a step-response (i.e., it is the value of the response as time goes to $\infty$ of a one-unit constant input), and so the steady-state gain is $|\lim_{n \to \infty} y(n)| = \lim_{n \to \infty} |y(n)|$.
Technically, you need to check that the limit exists (which I've tried to emphasize). It seems to me that a sufficient condition would be that all the poles of the transfer response be strictly inside the unit circle. (Caveat lector: I haven't checked that closely at all.)
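As a quick numerical sanity check, here is a minimal sketch using an illustrative first-order system $y(n) = \alpha\, y(n-1) + \beta\, x(n)$, i.e. $H(z) = \beta/(1 - \alpha z^{-1})$ with its single pole at $z = \alpha$ strictly inside the unit circle (the parameter values are assumptions for the example):

```python
# Illustrative stable first-order filter: H(z) = beta / (1 - alpha z^{-1}),
# pole at z = alpha = 0.9, inside the unit circle, so the step response converges.
alpha, beta = 0.9, 0.5

y = 0.0
for n in range(2000):
    y = alpha * y + beta * 1.0   # drive with a unit step: x(n) = 1 for n >= 0

H_at_1 = beta / (1 - alpha)      # "plug z = 1 into the transfer response"
assert abs(y - H_at_1) < 1e-9    # steady-state step response equals H(1)
```

Here $H(1) = 0.5/0.1 = 5$, and the simulated step response settles at the same value, as the final-value-theorem argument predicts.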
If this does not sufficiently clarify things, you might try doing Google searches on terms like "dc gain" and "final-value theorem", which are closely related to what you want.