For a finite MC it holds that
aperiodic + irreducible $\Leftrightarrow$ ergodic $\Leftrightarrow$ regular
as you expected. For an infinite MC it holds that
aperiodic + irreducible + positive recurrent $\Leftrightarrow$ ergodic,
and being "regular" in the infinite setting would require a more precise definition.
................................ explanations following ................................
For every finite or infinite Markov chain (MC) it holds that
aperiodic + irreducible + positive recurrent $\Leftrightarrow$ ergodic.
See for example here for a proof. For every finite MC, irreducibility already implies positive recurrence, see here for a proof.
Further, for every finite MC we have that
aperiodic + irreducible $\Leftrightarrow$ regular.
Proof sketch: the definition of a finite irreducible MC gives that $\forall i, j \in \Omega : \exists k > 0 : P^k[i,j] > 0$.
However, there might be no single $k$ for which all entries are simultaneously positive, due to periodicity. If the chain is additionally aperiodic, it follows that
$\exists k > 0 : \forall i, j \in \Omega : P^k[i,j] > 0$,
which matches your definition of being regular.
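The equivalence above can be checked numerically: for an aperiodic irreducible finite chain, some power of the transition matrix has all entries strictly positive. A minimal sketch (the 3-state matrix `P` and the helper `regularity_index` are illustrative choices, not from the original post):

```python
import numpy as np

# Hypothetical 3-state chain: irreducible, and aperiodic thanks to the
# self-loop at state 0.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

def regularity_index(P, max_k=100):
    """Smallest k with all entries of P^k strictly positive, or None."""
    Pk = np.eye(len(P))
    for k in range(1, max_k + 1):
        Pk = Pk @ P
        if (Pk > 0).all():
            return k
    return None

print(regularity_index(P))  # 4: P^4 is the first all-positive power
```

For a periodic chain (e.g. a deterministic cycle) the function returns `None`, matching the remark that periodicity can prevent any single $k$ from working.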
Finally, I don't see a canonical way how you would generalize the property "regular" to infinite Markov chains. So, I just ignore the term "regular" for infinite chains here.
A) You did not consider the case where, say, $\{1,2\}$ and $\{3,4\}$ are two separate communicating classes with
$p_{12}=p_{21}=p_{34}=p_{43}=1/2$; then there is no unique stationary distribution. If there is only one closed communicating class $A$ and all other states are transient, what you said is true.
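The two-class example can be verified directly. A sketch, assuming the remaining half of each row's mass sits on self-loops so that rows sum to 1 (that completion is not spelled out in the original):

```python
import numpy as np

# Two closed communicating classes {0,1} and {2,3}; within each class the
# chain swaps states w.p. 1/2 and stays put w.p. 1/2 (an assumed completion
# of the example so rows sum to 1).
P = np.block([[np.full((2, 2), 0.5), np.zeros((2, 2))],
              [np.zeros((2, 2)), np.full((2, 2), 0.5)]])

pi1 = np.array([0.5, 0.5, 0.0, 0.0])   # stationary on the first class
pi2 = np.array([0.0, 0.0, 0.5, 0.5])   # stationary on the second class
mix = 0.3 * pi1 + 0.7 * pi2            # any convex combination also works

for pi in (pi1, pi2, mix):
    assert np.allclose(pi @ P, pi)
print("all three vectors are stationary")
```

So with two closed classes there is a whole segment of stationary distributions, not a unique one.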
If we want to talk about one positive recurrent class instead of one irreducible chain, does this method of transferring the results ( by restricting the result to the subset A) always work?
Yes.
B) I do not really understand what you are trying to ask here. Take the example of a chain with two communicating classes. A possible solution to $\pi^T = \pi^T P$ is to set $\pi$ to $0$ on all states in one class and to the unique stationary distribution of the other class.
Then the set of equilibrium measures is the set of convex combinations of these vectors. You cannot use $\pi^T = \pi^T P$ to pin down a unique $\pi$ unless you know that there is only one closed communicating class (for the same reason as in question A).
A finite state Markov chain has a unique stationary distribution iff it contains exactly one closed communicating class. (So what you said is true if you assume there is just one closed communicating class and every other class is open.)
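The "one closed class, transient states get mass zero" case can also be seen numerically. A sketch with an assumed 3-state matrix (state 0 transient, $\{1,2\}$ the single closed class), extracting $\pi$ as the left eigenvector for eigenvalue $1$:

```python
import numpy as np

# One transient state (0) feeding into a single closed class {1, 2}:
# the stationary distribution exists, is unique, and puts mass 0 on state 0.
P = np.array([[0.5, 0.25, 0.25],
              [0.0, 0.3,  0.7 ],
              [0.0, 0.6,  0.4 ]])

# Solve pi^T = pi^T P via the left eigenvector for the largest eigenvalue,
# which is 1 for a stochastic matrix.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi /= pi.sum()
print(pi)  # approximately [0, 6/13, 7/13]
```

The transient state indeed carries no stationary mass; the positive entries solve the restricted system on the closed class alone.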
C) Not as far as I know. To find the period of a state, take the greatest common divisor of the lengths of the paths on which it returns to itself; finding two return paths whose lengths have gcd $1$ already proves aperiodicity. This is normally quite easy for a simple Markov chain.
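The gcd computation can be automated for small chains. A sketch (the helper `period` is illustrative, not from the original answer):

```python
from math import gcd
import numpy as np

def period(P, i, max_len=50):
    """Period of state i: gcd of all lengths k <= max_len with P^k[i,i] > 0."""
    g = 0
    Pk = np.eye(len(P))
    for k in range(1, max_len + 1):
        Pk = Pk @ P
        if Pk[i, i] > 0:
            g = gcd(g, k)   # gcd(0, k) == k, so g starts at the first return
    return g

# Deterministic 3-cycle: the chain returns only at multiples of 3.
C = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
print(period(C, 0))  # 3
```

Adding a self-loop anywhere on the cycle makes `period` return 1, i.e. the chain becomes aperiodic.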
D) consider something like
1 -> 2 <-> 3
then the class $\{2, 3\}$ is closed and $\{1\}$ is open. Open and closed are class properties, and so are transience and recurrence. In a finite chain, recurrent and closed are the same thing.
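The closedness check is just "no probability leaks out of the set". A sketch of the 1 -> 2 <-> 3 example, with states renumbered 0, 1, 2 and assumed transition probabilities (the original only gives the arrow diagram):

```python
import numpy as np

# The chain 1 -> 2 <-> 3 from the example, states renumbered to 0, 1, 2;
# the exact probabilities are an assumed choice consistent with the arrows.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.5, 0.5]])

def is_closed(P, C):
    """A set C is closed iff each row's mass inside C equals its total mass."""
    C = list(C)
    return np.isclose(P[C].sum(axis=1), P[np.ix_(C, C)].sum(axis=1)).all()

print(is_closed(P, {1, 2}))  # True:  {2, 3} in the example's numbering
print(is_closed(P, {0}))     # False: state 1 is open
```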
The only difference occurs in infinite state Chains.
E) For an uncountable state space, you can just integrate $k$ times.
For example
$$P(X_2\in A|X_0=x) = \int_{y\in\Omega}\int_{z\in A} q(x,y)q(y,z)dydz$$
where $\Omega$ is the entire state space. For $X_3$, you just integrate three times, etc. Notice that the summation (used for countable state spaces) is just an integral with respect to counting measure.
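The double integral can be approximated on a grid. A sketch with a hypothetical Gaussian kernel (from $x$, jump to $\mathcal{N}(x/2, 1)$) and target set $A = [0, 1]$; both the kernel and the set are illustrative choices, not from the original:

```python
import numpy as np

# Hypothetical transition density on the real line: from x, jump to N(x/2, 1).
def q(x, y):
    return np.exp(-0.5 * (y - 0.5 * x) ** 2) / np.sqrt(2 * np.pi)

x0 = 0.0
ys = np.linspace(-10, 10, 2001)   # truncates Omega; tails beyond are negligible
zs = np.linspace(0.0, 1.0, 201)   # the target set A = [0, 1]
dy, dz = ys[1] - ys[0], zs[1] - zs[0]

# P(X_2 in A | X_0 = x0): integrate q(x0, y) q(y, z) over y in Omega, z in A,
# here as a Riemann sum on the grid.
inner = (q(x0, ys)[:, None] * q(ys[:, None], zs[None, :])).sum(axis=0) * dy
prob = (inner * dz).sum()
print(prob)
```

Here $X_2 \mid X_0 = 0$ is exactly $\mathcal{N}(0, 5/4)$, so the printed value should be close to $\Phi(1/\sqrt{1.25}) - \Phi(0) \approx 0.314$, which checks the two-step formula.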
F) You stated the necessary and sufficient condition for recurrence. There is a standard textbook proof, and yes, verifying it for one pair $i, j$ is enough because recurrence and transience are class properties. In fact, it is often verified for $i = j$.
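A classical instance of the $i = j$ check, assuming the criterion in question is the divergence of $\sum_k P^k[i,i]$: for the simple symmetric random walk on $\mathbb{Z}$, the return probabilities at the origin are $P^{2k}[0,0] = \binom{2k}{k}/4^k \sim 1/\sqrt{\pi k}$, whose partial sums grow without bound (recurrence) even though the terms tend to $0$:

```python
from math import comb

# Simple symmetric random walk on Z: return probabilities at the origin are
# p_{2k} = C(2k, k) / 4^k ~ 1 / sqrt(pi * k), so the series diverges.
def partial_sum(n):
    return sum(comb(2 * k, k) / 4 ** k for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, partial_sum(n))   # grows roughly like 2 * sqrt(n / pi)
```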
Best Answer
For a Markov chain with $N<\infty$ states, the set $I$ of invariant probability vectors is a non-empty simplex in ${\mathbb R}^N$ whose extreme points correspond to the recurrent classes of the chain. Thus, the vector is unique iff there is exactly one recurrent class; the transient states (if any) play absolutely no role (as in Jens's example). The set $I$ is a point, line segment, triangle, etc. exactly when there are one, two, three, etc. recurrent classes.
If the invariant vector $\pi$ is unique, then there is only one recurrent class and the chain will eventually end up there. The vector $\pi$ necessarily puts zero mass on all transient states. Letting $\phi_n$ be the law of $X_n$, as you say, we have $\phi_n\to \pi$ only if the recurrent class is aperiodic. However, in general we have Cesàro convergence: $${1\over n}\sum_{j=1}^n \phi_j\to\pi.$$
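The gap between plain convergence and Cesàro convergence shows up already for the two-state flip chain (a minimal sketch; the chain is an illustrative choice): $\phi_n$ alternates between $(1,0)$ and $(0,1)$ and never converges, yet the running averages converge to $\pi = (1/2, 1/2)$.

```python
import numpy as np

# Period-2 chain: phi_n alternates between (1,0) and (0,1) forever,
# but the Cesaro averages (1/n) sum_{j<=n} phi_j converge to (1/2, 1/2).
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
phi = np.array([1.0, 0.0])   # start in state 0

n = 1000
avg = np.zeros(2)
for _ in range(n):
    phi = phi @ P
    avg += phi
avg /= n
print(avg)  # [0.5 0.5]
```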
An infinite state space Markov chain need not have any recurrent states, and may have the zero measure as the only invariant measure, finite or infinite. Consider the chain on the positive integers which jumps to the right at every time step.
Generally, a Markov chain with countable state space has invariant probabilities iff it has positive recurrent classes. If so, every invariant probability vector $\nu$ is a convex combination of the unique invariant vectors $m_j$, one for each positive recurrent class $j\in J$, i.e., $$\nu=\sum_{j\in J} c_j m_j,\qquad c_j\geq 0,\quad \sum_{j\in J}c_j=1.$$
This result is Corollary 3.23 in Wolfgang Woess's Denumerable Markov Chains.