A) You did not consider the case where, say, $\{1,2\}$ and $\{3,4\}$ are two separate closed communicating classes with
$p_{12}=p_{21}=p_{34}=p_{43}=1/2$; then there is no unique stationary distribution. If there is only one closed communicating class, $A$, and all other states are transient, what you said is true.
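A quick numerical check of this example (states indexed $0..3$, with the remaining probability mass placed on self-loops, which is one concrete choice consistent with the transition probabilities above):

```python
import numpy as np

# Chain with two closed communicating classes {0,1} and {2,3}.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.5, 0.5],
])

pi_a = np.array([0.5, 0.5, 0.0, 0.0])  # supported on the first class
pi_b = np.array([0.0, 0.0, 0.5, 0.5])  # supported on the second class

# Both satisfy pi P = pi, so the stationary distribution is not unique.
print(np.allclose(pi_a @ P, pi_a))  # True
print(np.allclose(pi_b @ P, pi_b))  # True
```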
If we want to talk about one positive recurrent class instead of one irreducible chain, does this method of transferring the results (by restricting them to the subset $A$) always work?
Yes.
B) I do not really understand what you are trying to say here. Take the example of a chain with two closed communicating classes. One solution to $\pi^T = \pi^T P$ is obtained by setting $\pi$ to $0$ on all states in one class and to the unique stationary distribution of the other class on that class.
The set of equilibrium distributions is then the set of convex combinations of these vectors. So you cannot conclude uniqueness from $\pi^T = \pi^T P$ unless you know that there is only one closed communicating class (for the same reason as in A).
A finite-state Markov chain has a unique stationary distribution iff it contains only one closed communicating class. (So what you said is true if you assume there is just one closed communicating class and everything else is open.)
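One way to check this numerically: for a finite stochastic matrix, the multiplicity of the eigenvalue $1$ of $P^T$ equals the number of closed communicating classes. A minimal sketch, reusing the two-class example from A:

```python
import numpy as np

# Two closed classes {0,1} and {2,3}: eigenvalue 1 of P^T has multiplicity 2,
# one independent stationary distribution per closed class.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.5, 0.5],
])

eigvals = np.linalg.eigvals(P.T)
n_stationary = int(np.sum(np.isclose(eigvals, 1.0)))
print(n_stationary)  # 2
```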
C) Not as far as I know. To find a state's period, find return paths of different lengths and take the greatest common divisor of those lengths. This is normally quite easy for a simple Markov chain.
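The recipe above can be sketched as follows (the helper `period` and the example matrix are mine, not from the question): collect the lengths $n$ with $(P^n)_{ss} > 0$ up to some cutoff and take their gcd.

```python
from math import gcd

import numpy as np

def period(P, state, max_len=50):
    """Period of `state`: gcd of the lengths n with (P^n)[state, state] > 0."""
    g = 0
    Pn = np.eye(len(P))
    for n in range(1, max_len + 1):
        Pn = Pn @ P
        if Pn[state, state] > 1e-12:  # a return path of length n exists
            g = gcd(g, n)
    return g

# Deterministic 3-cycle 0 -> 1 -> 2 -> 0: every state has period 3.
P = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
print(period(P, 0))  # 3
```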
D) consider something like
1 -> 2 <-> 3
then $\{2, 3\}$ is a closed class and $\{1\}$ is open. Being open or closed is a class property, and so are transience and recurrence. In a finite chain, a class is recurrent if and only if it is closed.
The only difference occurs in infinite-state chains.
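A minimal numerical check of this example, with hypothetical transition probabilities filled in (the diagram only fixes which arrows exist; states are indexed $0, 1, 2$):

```python
import numpy as np

# One concrete realization of 1 -> 2 <-> 3:
# state 0 leaves for state 1 and never returns; {1, 2} is a closed class.
P = np.array([
    [0.0, 1.0, 0.0],
    [0.0, 0.5, 0.5],
    [0.0, 0.5, 0.5],
])

Pn = np.linalg.matrix_power(P, 50)
print(Pn[0])                      # all mass from state 0 sits in {1, 2}
print(np.isclose(Pn[0, 0], 0.0))  # True: the open class is transient
```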
E) For an uncountable state space, you can just integrate $k$ times.
For example
$$P(X_2\in A|X_0=x) = \int_{y\in\Omega}\int_{z\in A} q(x,y)q(y,z)dydz$$
where $\Omega$ is the entire state space. For $X_3$, you just need to integrate three times, etc. Notice that the summation used for a countable state space is just an integral with respect to counting measure.
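The two-step formula can be checked numerically with a Riemann sum. Here is a sketch using a toy kernel I made up for the purpose: the state space is $\Omega = [0,1]$ and $q(x,y) = 1$, i.e. from any $x$ the next state is uniform on $[0,1]$, so $P(X_2 \in A \mid X_0 = x)$ should come out to the Lebesgue measure of $A$.

```python
import numpy as np

def q(x, y):
    # Hypothetical constant transition density on [0,1] x [0,1].
    return np.ones(np.broadcast(x, y).shape)

# Approximate P(X_2 in A | X_0 = x) = int_Omega int_A q(x,y) q(y,z) dz dy
# by a midpoint Riemann sum, for A = [0, 0.25] and x = 0.3.
n = 400
grid = (np.arange(n) + 0.5) / n   # midpoints of [0, 1]
dy = dz = 1.0 / n
x = 0.3
A = grid[grid <= 0.25]
Y, Z = np.meshgrid(grid, A, indexing="ij")
prob = np.sum(q(x, Y) * q(Y, Z)) * dy * dz
print(round(prob, 3))  # 0.25, the Lebesgue measure of A
```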
F) You stated the necessary and sufficient condition for recurrence. There is a standard textbook proof, and yes, verifying this for one pair of $i, j$ is enough because recurrence and transience are class properties. In fact it is often verified for $i = j$.
Suppose $x=0$. Since $\mathbb{P}(X_1=0|X_0=i)=1/4$ for every state $i$, we get, for $t \geq 1$,
$$\begin{align}
\mathbb{P}(X_t=0|X_0 = 0) &= \sum_{i=0}^\infty \mathbb{P}(X_{t-1}=i|X_0 = 0)\mathbb{P}(X_1=0|X_0 = i)\\
&= \dfrac{1}{4}\sum_{i=0}^\infty \mathbb{P}(X_{t-1}=i|X_0 = 0)\\
&= 1/4,
\end{align}$$
so $x=0$ is recurrent since
$$\sum_n \mathbb{P}(X_n=0|X_0 = 0) = +\infty. $$
Now, all states communicate and recurrence is a class property, so all states are recurrent.
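A sketch of this computation on a truncated version of the chain. The transition rule here is a hypothetical one chosen only to satisfy the property actually used above, $\mathbb{P}(X_1=0|X_0=i)=1/4$ for every $i$ (the question's exact chain may differ): from state $i$, jump to $0$ with probability $1/4$, else to $i+1$.

```python
import numpy as np

# Truncate the state space at N; the truncation error is absorbed into a
# self-loop at the boundary so each row still sums to 1.
N = 60
P = np.zeros((N, N))
P[:, 0] = 0.25
for i in range(N - 1):
    P[i, i + 1] = 0.75
P[N - 1, N - 1] += 0.75

Pn = np.eye(N)
for t in range(1, 11):
    Pn = Pn @ P
    # P(X_t = 0 | X_0 = 0) is exactly 1/4 for every t >= 1, so the series
    # sum_n P(X_n = 0 | X_0 = 0) diverges and 0 is recurrent.
    assert np.isclose(Pn[0, 0], 0.25)
print("P(X_t = 0 | X_0 = 0) = 1/4 for t = 1..10")
```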
For the periodicity, notice that $1\rightarrow 0\rightarrow 1$ is a path from $1$ to itself of length 2 with positive probability, and $1\rightarrow 0\rightarrow 0\rightarrow 1$ is a path from $1$ to itself of length 3 with positive probability. Since $\gcd(2,3)=1$, state $1$ has period 1, and as the chain is irreducible, it is aperiodic.
Best Answer
I think a state-space diagram is generally a good way to go. There are other approaches, but for a problem like this -- one with just a few states -- I think it is by far the conceptually clearest. (You said it was "very messy" -- I don't think I agree!)
I think you already know enough to answer the recurrence question. Hint: Would a random walk started at state 1 be guaranteed to eventually return to state 1? Why or why not?
If you want an approach that doesn't look at a state-space diagram, you might consider this: if your matrix is $M$, then $M^{n}$ contains the transition probabilities after taking $n$ steps. (This is the best part of representing a Markov chain by a transition matrix.) So, if your chain is aperiodic, you can learn quite a lot about it by raising the matrix to some large powers. If you can, I recommend raising your matrix to, say, the 100th power to see what you can see.
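For instance, with a small example matrix of my own (not the one from the question), every row of $M^{100}$ converges to the same vector, which is the unique stationary distribution:

```python
import numpy as np

# A hypothetical irreducible, aperiodic 3-state chain.
M = np.array([
    [0.9, 0.1, 0.0],
    [0.4, 0.4, 0.2],
    [0.0, 0.5, 0.5],
])

M100 = np.linalg.matrix_power(M, 100)
print(M100.round(4))
# All rows agree: the common row is the stationary distribution.
print(np.allclose(M100[0], M100[1]) and np.allclose(M100[1], M100[2]))  # True
```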