The recurrent classes do not affect each other. Once we enter one recurrent class, we never leave, and the structure of the other recurrent classes is irrelevant.
Take an example with 4 states and transition probability matrix $(P_{ij})$ given by:
$$ (P_{ij}) = \left[ \begin{array}{cccc}
0 & 1/2 & 1/2 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{array}
\right] $$
State 1 is transient. State 2 forms an aperiodic recurrent class: If we get to state 2 then we always stay there. States 3 and 4 form a periodic recurrent class: If we get to state 3, we bounce around (periodically) between 3 and 4.
So:
- Given we start in state 2: We have a steady state distribution of $\pi=[0,1,0,0]$ (the limiting probabilities converge to this, and the time-average fractions of time spent in each state also converge to this with probability 1).
- Given we start in state 3: The limiting probabilities do not converge (they oscillate depending on even or odd slots), but the time averages converge to $p = [0,0,1/2,1/2]$ with probability 1.
- Given we start in state 1: The time averages still converge, but not to a constant vector with probability 1. Rather, they converge to a random vector, and what they converge to depends on the outcome of the first transition (see the simulation sketch after this list). If $p=[p_1,p_2,p_3,p_4]$ denotes the time averages, then given we start in state 1 we get:
$$ p = \left\{ \begin{array}{ll}
[0, 1, 0, 0] &\mbox{ with prob $1/2$} \\
[0, 0, 1/2, 1/2] & \mbox{ with prob $1/2$}
\end{array}
\right. $$
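To make this concrete, here is a minimal simulation sketch in Python of the 4-state example above (the helper names `step` and `time_averages` are just illustrative, not from the original): each run started from state 1 should produce time averages close to either $[0,1,0,0]$ or $[0,0,1/2,1/2]$, roughly half the time each.

```python
import random

# Transition probabilities for the 4-state example (states 1..4).
P = {
    1: [(2, 0.5), (3, 0.5)],
    2: [(2, 1.0)],
    3: [(4, 1.0)],
    4: [(3, 1.0)],
}

def step(state):
    """Sample the next state from the current state's transition row."""
    r, acc = random.random(), 0.0
    for nxt, prob in P[state]:
        acc += prob
        if r < acc:
            return nxt
    return P[state][-1][0]

def time_averages(start, n_steps=100_000):
    """Fraction of time spent in each state along one sample path."""
    counts = {s: 0 for s in P}
    state = start
    for _ in range(n_steps):
        counts[state] += 1
        state = step(state)
    return [counts[s] / n_steps for s in sorted(P)]

# Starting in state 1: each run's time averages end up close to either
# [0, 1, 0, 0] or [0, 0, 1/2, 1/2], each with probability 1/2.
for _ in range(4):
    print(time_averages(1))
```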
In general, for a finite-state discrete-time Markov chain with $K$ recurrent classes, each recurrent class $k \in \{1, \ldots, K\}$ has a unique probability distribution $\pi_k$ that satisfies $\pi_k = \pi_k P$ (where $P$ is the transition probability matrix) and whose support is contained in the states of recurrent class $k$. If we start at a state in recurrent class $k$, then with probability 1 the time averages converge to $\pi_k$. If we start in a transient state, then the time averages converge to a random vector $p$: we eventually enter one of the recurrent classes (namely, the first one we visit) and never leave it. Define $\theta_k$ as the probability that the first recurrent class we visit is class $k$ (for each $k \in \{1, \ldots, K\}$). Then $p$ is a random vector with $p = \pi_k$ with probability $\theta_k$, for $k \in \{1, \ldots, K\}$.
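As one way to compute these quantities, the sketch below (assuming NumPy, with the recurrent classes of the 4-state example identified by hand rather than detected automatically) obtains each $\pi_k$ as the left eigenvector of the class sub-matrix for eigenvalue 1, and the absorption probabilities $\theta_k$ from the transient states via the standard fundamental-matrix formula $(I-Q)^{-1}R$.

```python
import numpy as np

# Transition matrix of the 4-state example (rows sum to 1).
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])

# Recurrent classes and transient states, identified by hand here
# (0-based indices): {state 2}, {states 3, 4}, and transient state 1.
classes = [[1], [2, 3]]
transient = [0]

# Stationary distribution of each class: solve pi = pi P restricted to
# the class, i.e. the left eigenvector of the sub-matrix for eigenvalue 1.
for C in classes:
    sub = P[np.ix_(C, C)]
    w, v = np.linalg.eig(sub.T)
    pi_C = np.real(v[:, np.argmax(np.real(w))])
    pi_C /= pi_C.sum()
    pi = np.zeros(len(P))
    pi[C] = pi_C
    print("pi for class", C, "=", pi)

# Absorption probabilities theta_k from the transient states:
# theta_k = (I - Q)^{-1} R_k 1, where Q is transient-to-transient and
# R_k is transient-to-class-k.
Q = P[np.ix_(transient, transient)]
N = np.linalg.inv(np.eye(len(transient)) - Q)   # fundamental matrix
for C in classes:
    R = P[np.ix_(transient, C)]
    theta = N @ R @ np.ones(len(C))
    print("theta for class", C, "=", theta)
```

For this example the output should show $\pi_1=[0,1,0,0]$, $\pi_2=[0,0,1/2,1/2]$, and $\theta_1=\theta_2=1/2$, matching the discussion above.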
What is being said here is just convergence in distribution, and that fact is actually somewhat vacuous. The issue is that you haven't specified an actual sequence of random variables; you've only specified the sequence of distributions given by $A^n q$. A Markov chain also introduces a corresponding sequence of random variables: given an initial distribution and $\omega \in \Omega$, we can obtain a sample path. But that is a different matter from the usual notion of "steady state" for Markov chains. In particular, a Markov chain will typically not converge a.s.; that would mean the sequence $X_n$ converges to a (randomly chosen) state, and since the state space is discrete, the sequence would have to be eventually constant (for a fixed $\omega$). That certainly doesn't happen for, say, $A=\begin{bmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{bmatrix}$: soon enough there will be another transition, and there is no "last" transition in the sequence.
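A small sketch of the distinction (Python; assuming the column-vector convention $A^n q$ used above, which is an interpretation of the notation): the distributions converge right away for this $A$, while a simulated sample path keeps jumping between the two states and is never eventually constant.

```python
import numpy as np

rng = np.random.default_rng(0)

# The 2-state example from the text; here A acts on column
# distribution vectors q (an assumed convention).
A = np.array([[0.5, 0.5],
              [0.5, 0.5]])
q = np.array([1.0, 0.0])            # start surely in state 0

# The distributions A^n q converge (here, immediately) ...
for n in range(3):
    q = A @ q
    print("distribution after", n + 1, "steps:", q)

# ... but a sample path X_n keeps jumping forever: it is not
# eventually constant, so X_n does not converge almost surely.
x, path = 0, []
for _ in range(20):
    x = rng.choice(2, p=A[x])
    path.append(x)
print("sample path:", path)
```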
Best Answer
Because this is a doubly stochastic matrix (both the rows and the columns sum to one), the unique stationary distribution is the uniform distribution over $\{0,1,2,3,4\}$. Indeed, \begin{align} &\quad\frac15\begin{pmatrix}1&1&1&1&1\end{pmatrix}\begin{pmatrix}0&q&0&0&1-q\\ 1-q&0&q&0&0\\ 0&1-q&0&q&0\\ 0&0&1-q&0&q\\ q&0&0&1-q&0\end{pmatrix}\\ &=\frac15\begin{pmatrix}1&1&1&1&1\end{pmatrix}. \end{align}
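A quick numerical check of this identity, assuming NumPy and an arbitrarily chosen value of $q$:

```python
import numpy as np

q = 0.3   # any q in [0, 1] works; 0.3 is chosen arbitrarily for the check
P = np.array([[0,     q,     0,     0,     1 - q],
              [1 - q, 0,     q,     0,     0    ],
              [0,     1 - q, 0,     q,     0    ],
              [0,     0,     1 - q, 0,     q    ],
              [q,     0,     0,     1 - q, 0    ]])

pi = np.full(5, 1 / 5)           # uniform distribution over the 5 states
print(np.allclose(pi @ P, pi))   # True: pi P = pi, so uniform is stationary
```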