Your statements are slightly confusing, since a transient chain could be considered "non null recurrent" (after all, it is not null recurrent). So you should replace your statements with "positive recurrent":
if all states of an irreducible Markov chain are positive recurrent, then the MC has a unique stationary distribution
if an irreducible Markov chain is finite, then all of its states are positive recurrent.
You can also say that if a Markov chain is irreducible and positive recurrent, then the (unique) stationary distribution has strictly positive components.
Every finite state irreducible Markov chain $\{M(t)\}_{t=0}^{\infty}$ has a unique stationary distribution $\pi =(\pi_i)_{i \in S}$ (where $S$ denotes the finite state space). When you simulate, with probability 1, the sample path fractions of time converge to this distribution, so that:
$$ \lim_{T\rightarrow\infty} \frac{1}{T}\sum_{t=0}^{T-1} 1\{M(t)=i\} = \pi_i \quad, \forall i \in S \quad \mbox{(with prob 1)}$$
regardless of the initial state $M(0)$. Taking expectations of both sides and using the bounded convergence theorem together with $E[1\{M(t)=i\}]=P[M(t)=i]$ we also get:
$$ \lim_{T\rightarrow\infty} \frac{1}{T}\sum_{t=0}^{T-1} P[M(t)=i] = \pi_i \quad, \forall i \in S$$
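A quick way to see the time-average statement is to simulate a small chain. Below is a minimal Python sketch; the 2-state transition matrix and its stationary distribution $\pi = (1/3, 2/3)$ are a hypothetical example of mine, not something from the question:

```python
import random

# Hypothetical 2-state irreducible chain (each row sums to 1).
# Solving pi = pi P by hand gives pi = (1/3, 2/3).
P = [[0.50, 0.50],
     [0.25, 0.75]]

def time_fractions(P, T, start=0, seed=0):
    """Simulate T steps of the chain and return the fraction of
    time spent in each state (the sample-path time averages)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    counts = [0] * len(P)
    state = start
    for _ in range(T):
        counts[state] += 1
        # Sample the next state from row P[state].
        u, cum = rng.random(), 0.0
        for j, p in enumerate(P[state]):
            cum += p
            if u < cum:
                state = j
                break
    return [c / T for c in counts]

frac = time_fractions(P, T=200_000)
# frac is close to (1/3, 2/3).
```

Changing `start` to 1 gives essentially the same fractions, matching the claim that the limit does not depend on $M(0)$.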
If the chain is finite state, irreducible, and also aperiodic, you can further say
$$ \lim_{t\rightarrow\infty} P[M(t)=i|M(0)=j] = \pi_i \quad, \forall i \in S$$
regardless of the initial state $j \in S$. So if the chain is finite state, irreducible, but limiting probabilities do not converge, then the chain cannot be aperiodic.
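The aperiodic statement can be checked numerically by raising $P$ to a high power: every row of $P^t$ converges to $\pi$. A small Python sketch with a hypothetical 2-state chain (for this matrix, $\pi = (1/3, 2/3)$):

```python
# Hypothetical finite, irreducible, aperiodic chain (rows sum to 1).
P = [[0.50, 0.50],
     [0.25, 0.75]]

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Compute P^t for a large t.  Row j of P^t is the distribution of
# M(t) given M(0) = j, so every row should approach the same pi.
Pt = P
for _ in range(60):
    Pt = mat_mul(Pt, P)
# Both rows of Pt are numerically (1/3, 2/3): the limit is
# independent of the initial state j.
```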
If $M(t)$ is finite state, irreducible, and periodic with period $d>1$, then limiting probabilities cannot converge (assuming we start in a particular state with probability 1). This is because:
\begin{align*}
\lim_{k\rightarrow\infty} P[M(kd)=i|M(0)=i] &> 0 \\
\lim_{k\rightarrow\infty} P[M(kd+1)=i|M(0)=i] &= 0
\end{align*}
This is because the Markov chain $\{Z(k)\}_{k=0}^{\infty}$ defined by $Z(k)=M(kd)$ is irreducible and aperiodic (over an appropriately reduced state space) and so all states it can reach have positive steady state values.
With this reasoning, it can be shown that $P[M(t)=i|M(0)=j]$ converges (as $t\rightarrow\infty$) to a periodic function with period $d$. The particular $d$-periodic function it converges to depends on the initial state.
This should be pretty clear if you do a few Matlab examples: plot $\vec{\pi}(t) = \vec{\pi}(0)P^t$ versus $t \in \{0, 1, 2, ...\}$ for some examples with $\vec{\pi}(0)=[1, 0, 0, ...]$ or $\vec{\pi}(0)=[0, 1, 0, 0, ...]$.
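Here is that experiment as a Python sketch instead of Matlab, using a hypothetical period-$2$ chain that deterministically swaps between two states; the component $P[M(t)=0 \mid M(0)=0]$ oscillates and never converges:

```python
# Hypothetical period-2 chain: deterministic swap between states 0 and 1.
P = [[0.0, 1.0],
     [1.0, 0.0]]

def step(pi, P):
    """One update pi(t+1) = pi(t) P."""
    n = len(P)
    return [sum(pi[x] * P[x][y] for x in range(n)) for y in range(n)]

pi = [1.0, 0.0]  # start in state 0 with probability 1
history = []
for t in range(6):
    history.append(pi[0])  # records P[M(t) = 0 | M(0) = 0]
    pi = step(pi, P)

# history == [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]: the probabilities
# oscillate with period d = 2, while the running time average
# still converges to the stationary value 1/2.
```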
Best Answer
That is a really convoluted way of describing a steady state in my opinion.
The set of potential states a system can be in is called $S$. For example, $S$ could be the set of answers to "how many kittens are in your socks?".
They seem to be describing a system that proceeds in discrete steps (a discrete-time Markov chain). So a step could be going from one day to the next.
$\pi_k(x)$ is the probability of being in state $x$ on step $k$. So there might be a 10% probability that you have 4 kittens in your socks on Tuesday: $\pi_{\text{Tuesday}}(4) = .1$.
$P(x,y)$ is the probability that you proceed from state $x$ to state $y$. So if you have 3 kittens in your socks on one day, there is a 14% chance that you'll have 5 the next day. $P(3,5) = .14$.
Every step has a probability associated with being in a certain state. If your socks can hold at most $5$ kittens, then $\pi_k(0) + \pi_k(1) + \pi_k(2) + \pi_k(3) + \pi_k(4) + \pi_k(5) = 1$ for every step $k$. That's just basic probability: the sum over all possibilities is 100%.
Knowing the transition probabilities $P$, and the state probabilities $\pi_k(\cdot)$ at a certain step, you can calculate the state probabilities at the next step, $\pi_{k+1}(\cdot)$. Specifically, $\pi_{k+1}(y) = \sum_{x \in S} \pi_{k}(x)P(x,y)$. "The probability of having 2 kittens in your socks is the probability of having 0 kittens times the probability of transitioning from 0 to 2, plus the probability of having 1 times the probability of transitioning from 1 to 2, plus ...".
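The one-step update $\pi_{k+1}(y) = \sum_{x \in S} \pi_k(x) P(x,y)$ is a few lines of code. The transition probabilities below are a hypothetical example for the at-most-2-kittens state space, not the ones behind the steady state in the example below:

```python
from fractions import Fraction as F

# Hypothetical transition probabilities for S = {0, 1, 2} kittens.
# Each row (fixed x) sums to 1.
P = {0: {0: F(1, 2), 1: F(1, 4), 2: F(1, 4)},
     1: {0: F(1, 4), 1: F(1, 2), 2: F(1, 4)},
     2: {0: F(1, 8), 1: F(1, 4), 2: F(5, 8)}}
S = [0, 1, 2]

def next_dist(pi_k, P):
    """Apply pi_{k+1}(y) = sum_x pi_k(x) P(x, y)."""
    return {y: sum(pi_k[x] * P[x][y] for x in S) for y in S}

pi_k = {0: F(1), 1: F(0), 2: F(0)}  # certainly 0 kittens on step k
pi_next = next_dist(pi_k, P)
# pi_next == {0: 1/2, 1: 1/4, 2: 1/4}, and it still sums to 1.
```

Iterating `next_dist` until the distribution stops changing is one way to locate a steady state numerically.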
When $\forall x ~~\pi_{k+1}(x) = \pi_k(x)$, then that $\pi$ is called a steady state. In that state, the transitions don't change the probability of each state from step to step.
Example:
You can have at most 2 kittens in your socks.
Then a steady state is:
$$\pi(0) = \frac{27}{128},\quad \pi(1) = \frac{15}{64},\quad \pi(2) = \frac{71}{128}$$
As you can check, using the given formula.