Your statements are slightly confusing, since a transient chain could be considered "non null recurrent" (after all, it is not null recurrent). So you should replace your statements with "positive recurrent":
- if all states of an irreducible Markov chain are positive recurrent, then the chain has a unique stationary distribution;
- if an irreducible Markov chain is finite, then all of its states are positive recurrent.
You can also say that if a Markov chain is irreducible and positive recurrent, then the (unique) stationary distribution has strictly positive components.
Every finite state irreducible Markov chain $\{M(t)\}_{t=0}^{\infty}$ has a unique stationary distribution $\pi =(\pi_i)_{i \in S}$ (where $S$ denotes the finite state space). When you simulate, with probability 1, the sample path fractions of time converge to this distribution, so that:
$$ \lim_{T\rightarrow\infty} \frac{1}{T}\sum_{t=0}^{T-1} 1\{M(t)=i\} = \pi_i \quad, \forall i \in S \quad \mbox{(with prob 1)}$$
regardless of the initial state $M(0)$. Taking expectations of both sides and using the bounded convergence theorem together with $E[1\{M(t)=i\}]=P[M(t)=i]$ we also get:
$$ \lim_{T\rightarrow\infty} \frac{1}{T}\sum_{t=0}^{T-1} P[M(t)=i] = \pi_i \quad, \forall i \in S$$
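As a quick sanity check of the sample-path convergence, here is a Python sketch; the 3-state transition matrix is an arbitrary illustrative choice, not taken from the question:

```python
import numpy as np

# An irreducible 3-state chain (an assumed example for illustration).
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.3, 0.3, 0.4]])

# Stationary distribution: normalized left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

# Simulate one sample path and record the fraction of time spent in each state.
rng = np.random.default_rng(0)
T = 200_000
state = 0
counts = np.zeros(3)
for _ in range(T):
    counts[state] += 1
    state = rng.choice(3, p=P[state])

print(pi)          # stationary distribution
print(counts / T)  # empirical fractions of time; close to pi
```

The empirical fractions agree with $\pi$ to a couple of decimal places regardless of the initial state, as the theorem predicts.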
If the chain is finite state, irreducible, and also aperiodic, you can further say
$$ \lim_{t\rightarrow\infty} P[M(t)=i|M(0)=j] = \pi_i \quad, \forall i \in S$$
regardless of the initial state $j \in S$. So if the chain is finite state, irreducible, but limiting probabilities do not converge, then the chain cannot be aperiodic.
If $M(t)$ is finite state, irreducible, and periodic with period $d>1$, then limiting probabilities cannot converge (assuming we start in a particular state with probability 1). This is because:
\begin{align*}
\lim_{k\rightarrow\infty} P[M(kd)=i|M(0)=i] &> 0 \\
\lim_{k\rightarrow\infty} P[M(kd+1)=i|M(0)=i] &= 0
\end{align*}
Indeed, the Markov chain $\{Z(k)\}_{k=0}^{\infty}$ defined by $Z(k)=M(kd)$ is irreducible and aperiodic (over an appropriately reduced state space), so all states it can reach have positive steady state probabilities.
With this reasoning, it can be shown that $P[M(t)=i|M(0)=j]$ converges (as $t\rightarrow\infty$) to a periodic function of $t$ with period $d$. The particular $d$-periodic function it approaches depends on the initial state.
This should be pretty clear if you try a few MATLAB examples: plot $\vec{\pi}(t) = \vec{\pi}(0)P^t$ versus $t \in \{0, 1, 2, \dots\}$ for some examples with $\vec{\pi}(0)=[1, 0, 0, \dots]$ or $\vec{\pi}(0)=[0, 1, 0, 0, \dots]$.
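Here is a Python analogue of that experiment; the 2-state deterministic-swap chain below is an assumed example with period $d=2$:

```python
import numpy as np

# Deterministic swap between two states: a chain with period d = 2.
# Starting from pi(0) = [1, 0], the distribution pi(t) = pi(0) P^t never
# converges, while its running time-average does.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
dist = np.array([1.0, 0.0])   # pi(0): start in state 1 with probability 1
avg = np.zeros(2)

T = 100
for t in range(T):
    avg += dist               # accumulate pi(0) + ... + pi(T-1)
    dist = dist @ P           # pi(t+1) = pi(t) P

print(dist)      # pi(100) = [1, 0]; pi(t) keeps flipping, no limit
print(avg / T)   # time-average -> [0.5, 0.5], the stationary distribution
```

Starting from $\vec{\pi}(0)=[0,1]$ instead gives the other $2$-periodic trajectory, illustrating the dependence on the initial state.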
Best Answer
In order to get the stationary distribution of the Markov chain in question we have to solve the equation $$[x_1\ x_2\ x_3 \ x_4 \ x_5\ \dots] \left[\begin{matrix}\frac{1}{2}&\frac{1}{2}&0&0&0&0&\dots\\ \frac{1}{3}&\frac{1}{3}&\frac{1}{3}&0&0&0&\dots\\ \frac{1}{4}&\frac{1}{4}&\frac{1}{4}&\frac{1}{4}&0&0&\dots\\ \frac{1}{5}&\frac{1}{5}&\frac{1}{5}&\frac{1}{5}&\frac{1}{5}&0&\dots\\ \vdots \end{matrix}\right]=[x_1\ x_2\ x_3 \ x_4 \ x_5\ \dots],$$ that is, we have to solve the following system of linear equations $$\begin{matrix} x_1=&\frac{1}{2}x_1+&\frac{1}{3}x_2+&\frac{1}{4}x_3+&\frac{1}{5}x_4+&\frac{1}{6}x_5+\dots\\ x_2=&\frac{1}{2}x_1+&\frac{1}{3}x_2+&\frac{1}{4}x_3+&\frac{1}{5}x_4+&\frac{1}{6}x_5+\dots\\ x_3=&0+\ \ &\frac{1}{3}x_2+&\frac{1}{4}x_3+&\frac{1}{5}x_4+&\frac{1}{6}x_5+\dots\\ x_4=&0+\ \ &0+\ \ &\frac{1}{4}x_3+&\frac{1}{5}x_4+&\frac{1}{6}x_5+\dots\\ x_5=&0+\ \ &0+\ \ &0+\ \ &\frac{1}{5}x_4+&\frac{1}{6}x_5+\dots\\ \vdots \end{matrix}$$ From here we can see that $$x_1=x_2=c\gt0,$$ and subtracting the equation for $x_{n+1}$ from the one for $x_n$ gives $$x_{n+1}=x_n-\frac{1}{n}x_{n-1}\quad\text{for } n\ge 2,$$ where the constant $c$ is not yet determined.
The sequence generated by this recursion tends to zero extremely fast. My guess is that for $$c=\frac{1}{e}$$ the sum of our numbers is $1$; that is, if $c=1/e$ then we have the stationary distribution of the Markov chain given by the OP.
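The guess can be checked numerically: run the recursion with $x_1=x_2=c=1$ and sum the terms; if the total tends to $e$, then $c=1/e$ indeed normalizes the sum to $1$. A short Python sketch:

```python
from math import e

# Recursion x_{n+1} = x_n - x_{n-1}/n with x_1 = x_2 = c = 1.
x_prev, x_curr = 1.0, 1.0          # x_1, x_2
total = x_prev + x_curr
for n in range(2, 60):
    x_prev, x_curr = x_curr, x_curr - x_prev / n   # compute x_{n+1}
    total += x_curr

print(total)   # -> 2.718281828..., i.e. e
```

The partial sums stabilize after only a handful of terms, consistent with how fast the sequence decays.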
Unfortunately I was not able to prove that the sum of our numbers is $e$ if $c=1$, so I have posted the related question (the answer is available on MSE). With that, we are done.
An interesting fact
Let's build a Markov automaton working according to the state transition matrix given here; it would not be that hard. The machine's behavior can even be described in words. If the machine is in state one, it stays there with probability $1/2$ and moves to the next state with the same probability. In the second state, the machine either goes back to the first state, stays, or jumps ahead to the third state; all three possibilities are equally likely. In general, from the $n^{\text{th}}$ state the machine either goes back to the first state, or to the second state, or ..., or stays, or jumps ahead to the $(n+1)^{\text{st}}$ state, each with probability $\frac{1}{n+1}$.
Now, let's start the machine in any one of its states. Then the relative frequency of visits to the first state will approach $\frac{1}{e}$. I've also calculated the probability that the machine is wandering somewhere beyond its $15^{\text{th}}$ state; this probability is less than $3.0007\times{10}^{-13}$.
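Both claims are easy to check with a short Monte Carlo simulation in Python (a sketch; the run length and seed are arbitrary choices):

```python
import random
from math import e

# Simulate the automaton described above: from state n, the machine moves
# to each of the states 1, 2, ..., n+1 with equal probability 1/(n+1).
random.seed(0)
T = 500_000
state = 1
visits_first = 0
beyond_15 = 0
for _ in range(T):
    if state == 1:
        visits_first += 1
    if state > 15:
        beyond_15 += 1
    state = random.randint(1, state + 1)

print(visits_first / T)   # relative frequency of state 1; roughly 1/e
print(beyond_15 / T)      # essentially zero, matching the tiny tail bound
```

With half a million steps, the fraction of time in the first state lands within about $0.01$ of $1/e \approx 0.3679$, and excursions beyond state $15$ essentially never occur.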