A) You did not consider the case where, say, $\{1,2\}$ and $\{3,4\}$ are two separate communicating classes with
$p_{12}=p_{21}=p_{34}=p_{43}=1/2$; then there is no unique stationary distribution. If there is only one closed communicating class $A$, and all other states are transient, what you said is true.
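This counterexample is easy to check numerically. A minimal numpy sketch (state $i$ maps to index $i-1$; the remaining transition mass sits on self-loops):

```python
import numpy as np

# Two separate communicating classes {1,2} and {3,4}:
# p12 = p21 = p34 = p43 = 1/2, remaining mass on self-loops.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5, 0.5]])

# Both of these are stationary, so the stationary distribution is not unique.
pi1 = np.array([0.5, 0.5, 0.0, 0.0])
pi2 = np.array([0.0, 0.0, 0.5, 0.5])
print(np.allclose(pi1 @ P, pi1), np.allclose(pi2 @ P, pi2))  # True True
```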
If we want to talk about one positive recurrent class instead of one irreducible chain, does this method of transferring the results (by restricting them to the subset $A$) always work?
Yes.
B) I do not really understand what you are trying to do here. Take as an example a chain with two closed communicating classes. One solution to $\pi^T = \pi^T P$ is to set $\pi$ to zero on all states of one class and take the unique stationary distribution of the other class.
The stationary distributions are then exactly the convex combinations of these vectors. So you cannot conclude uniqueness from $\pi^T = \pi^T P$ alone unless you know that there is only one closed communicating class (for the same reason as in A).
A finite-state Markov chain has a unique stationary distribution iff it contains exactly one closed communicating class. (So what you said is true if you assume there is just one closed communicating class and every other class is open.)
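Using the two-class chain from A) as an example, a short numpy sketch shows that every convex combination of the per-class stationary distributions is again stationary:

```python
import numpy as np

# Two closed classes {0, 1} and {2, 3}, each internally doubly stochastic.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5, 0.5]])
pi1 = np.array([0.5, 0.5, 0.0, 0.0])   # stationary on the first class
pi2 = np.array([0.0, 0.0, 0.5, 0.5])   # stationary on the second class

# Any convex combination t*pi1 + (1-t)*pi2 is again a stationary distribution.
for t in (0.0, 0.3, 0.7, 1.0):
    pi = t * pi1 + (1 - t) * pi2
    assert np.allclose(pi @ P, pi) and np.isclose(pi.sum(), 1.0)
print("all convex combinations are stationary")
```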
C) Not as far as I know. To find a state's period, find return paths and take the greatest common divisor of their lengths; in particular, two return paths of coprime lengths show the state is aperiodic. This is normally quite easy for a simple Markov chain.
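As an illustration, here is a sketch (the helper `period` is my own, not a standard library function) that computes the period of a state as the gcd of return-path lengths with positive probability:

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, nmax=50):
    """gcd of all return times n <= nmax with P^n[i, i] > 0 (assumes one exists)."""
    Pn = np.eye(len(P))
    times = []
    for n in range(1, nmax + 1):
        Pn = Pn @ P
        if Pn[i, i] > 0:
            times.append(n)
    return reduce(gcd, times)

# Deterministic 3-cycle 0 -> 1 -> 2 -> 0: return paths have lengths 3, 6, 9, ...
C3 = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
print(period(C3, 0))  # 3

# Adding a self-loop gives return paths of lengths 1 and 3, so the gcd is 1.
L = np.array([[0.5, 0.5, 0.0], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
print(period(L, 0))  # 1
```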
D) consider something like
1 -> 2 <-> 3
then $\{2,3\}$ is a closed class and $\{1\}$ is an open one. Open and closed are class properties, and so are transience and recurrence. In a finite chain, recurrent and closed classes coincide; the two notions differ only for infinite-state chains.
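To make the example concrete, here is a small sketch (the implementation and state encoding are my own) that finds the communicating classes of the chain 1 -> 2 <-> 3 and checks which ones are closed:

```python
import numpy as np

# States 1, 2, 3 mapped to indices 0, 1, 2: state 0 moves to 1; 1 and 2 communicate.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.5, 0.5]])
n = len(P)

# Reachability matrix R[i, j]: can j be reached from i in zero or more steps?
R = (np.eye(n) + P) > 0
for _ in range(n):                     # repeated squaring gives the transitive closure
    R = (R.astype(int) @ R.astype(int)) > 0

# Communicating classes: i ~ j iff each is reachable from the other.
classes, seen = [], set()
for i in range(n):
    if i not in seen:
        C = {j for j in range(n) if R[i, j] and R[j, i]}
        classes.append(C)
        seen |= C

# A class is closed iff no transition leaves it.
closed = [C for C in classes if all(P[i, j] == 0 for i in C for j in set(range(n)) - C)]
print(classes, closed)  # [{0}, {1, 2}] [{1, 2}]
```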
E) For an uncountable state space, you can just integrate $k$ times.
For example
$$P(X_2\in A\mid X_0=x) = \int_{y\in\Omega}\int_{z\in A} q(x,y)\,q(y,z)\,dz\,dy$$
where $\Omega$ is the entire state space. For $X_3$, you just need to integrate $3$ times, etc. Notice that summation (used for countable state spaces) is just integration with respect to counting measure.
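In the countable (here finite) case, the double integral collapses to the familiar matrix product. A minimal sketch with an arbitrary two-state example:

```python
import numpy as np

# Two-state chain: q(x, y) is just the transition matrix entry P[x, y].
P = np.array([[0.2, 0.8],
              [0.6, 0.4]])

# "Integrating" over the middle state with respect to counting measure
# is the sum over y, i.e. exactly the (x, z) entry of P @ P.
x, z = 0, 1
two_step = sum(P[x, y] * P[y, z] for y in range(2))
print(np.isclose(two_step, (P @ P)[x, z]))  # True
```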
F) You stated the necessary and sufficient condition for recurrence. There is a standard textbook proof, and yes, verifying this for one pair $i, j$ is enough because recurrence and transience are class properties. In fact, it is often verified for $i=j$.
What text are you referring to, exactly? Probability: Theory and Examples, Fifth Edition, by Rick Durrett? The book proves these theorems in detail, so which part of the proof are you having trouble with?
The theorem you are asking about is numbered 5.5.11, not 4.7; perhaps you have an older edition of the text? The claim that if $p$ is irreducible and $i$ is positive recurrent, then
$$
\pi_j = \frac{\sum_{n=0}^\infty \mathbb P(X_n=j, T_i > n\mid X_0=i)}{\mathbb E[T_i\mid X_0=i]}
$$
defines a stationary distribution is stated as part of Theorem 5.5.12.
As for the statement in the title of the question, if $\pi$ is a stationary distribution then by definition it must sum to one. The other equality follows from Theorem 5.5.7 in the book, which states:
Let $i$ be a recurrent state, and let $T=\inf\{n\geqslant 1: X_n=i\}$. Then
$$
\mu_i(j) = \mathbb E\left[\sum_{n=0}^{T-1} \mathsf 1_{\{X_n=j\}}\right] = \sum_{n=0}^\infty \mathbb P(X_n=j,T>n\mid X_0=i)
$$
defines a stationary measure.
In the text, the author uses Fubini's theorem to show the above. The equality $\pi_i = \frac1{\mathbb E[T_i\mid X_0=i]}$ then follows from summing $\mu_i$ over all states and using Fubini's theorem again to show that this sum equals $\mathbb E[T_i\mid X_0=i]$.

Finally, one uses the result that an irreducible, positive recurrent Markov chain has a unique stationary measure up to constant multiples; this is Theorem 5.5.9 in the text, and its proof requires a bit of finesse. From it, $\pi_i = \frac{\mu_i(i)}{\mathbb E[T_i\mid X_0=i]}$, and since by definition $\mu_i(i)=1$, we have $\pi_i = \frac{1}{\mathbb E[T_i\mid X_0=i]}$. Since this equality holds for each state, it remains true after summing over all states, and hence
$$
\sum_i \pi_i = \sum_i \frac{1}{\mathbb E[T_i\mid X_0=i]},
$$
as desired.
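For a finite irreducible chain, both identities can be checked numerically. In the sketch below (the matrix values are an arbitrary example of mine), the expected visit counts $\mu_i(j)$ are computed in closed form as $r(I-Q)^{-1}$, where $Q$ is $P$ with the row and column of $i$ deleted and $r$ is row $i$ of $P$ restricted to the other states:

```python
import numpy as np

# An arbitrary irreducible 3-state chain.
P = np.array([[0.10, 0.60, 0.30],
              [0.40, 0.40, 0.20],
              [0.50, 0.25, 0.25]])
i = 0

# mu_i(j) for j != i: expected visits to j strictly before returning to i,
# equal to [r (I - Q)^{-1}]_j with Q = P restricted to states != i, r = P[i, != i].
Q, r = P[1:, 1:], P[0, 1:]
mu_rest = r @ np.linalg.inv(np.eye(2) - Q)
mu = np.concatenate(([1.0], mu_rest))      # mu_i(i) = 1 (the visit at time 0)

ET = mu.sum()                              # sum_j mu_i(j) = E[T_i | X_0 = i]
pi = mu / ET                               # normalized stationary measure

print(np.allclose(pi @ P, pi))             # True: pi is stationary
print(np.isclose(pi[i], 1 / ET))           # True: pi_i = 1 / E[T_i | X_0 = i]
```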
Best Answer
Here is a possible proof approach.
You say that you already know a transient state $i$ is almost surely visited only finitely many times. Can you show the stronger statement that the expectation of the number of visits to $i$ is finite?
Then, denoting the number of visits to $i$ by $N(i) = \sum_{k=0}^\infty 1(X_k=i)$, and using that the chain is started from the stationary distribution, so that $P(X_k=i)=\pi_i$ for every $k$, $$\mathbb{E}[N(i)] = \sum_{k=0}^\infty \mathbb{E}[1(X_k=i)] = \sum_{k=0}^\infty P(X_k=i) = \sum_{k=0}^\infty \pi_i,$$ which can only be finite if $\pi_i=0$.
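Both claims can be sanity-checked numerically. In the sketch below (the chain is an example of mine, with state 0 transient), the expected visit count is a convergent geometric series, and the stationary distribution puts zero mass on the transient state:

```python
import numpy as np

# State 0 stays with prob 1/2 and otherwise leaves for the closed class {1, 2}
# forever, so P^k[0, 0] = (1/2)^k and state 0 is transient.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.5, 0.5]])

# Expected number of visits to state 0, starting at 0:
# sum_k P^k[0, 0] = sum_k (1/2)^k = 2 < infinity.
EN = sum(np.linalg.matrix_power(P, k)[0, 0] for k in range(200))
print(round(EN, 6))  # 2.0

# The unique stationary distribution puts zero mass on the transient state.
pi = np.array([0.0, 0.5, 0.5])
print(np.allclose(pi @ P, pi), pi[0])  # True 0.0
```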