True for finite state systems
If the chain has a finite state space $S$, and if there is only one closed communicating class, then there is a unique stationary distribution (satisfying $\pi = \pi P$) and for each $i \in S$, the fraction of time spent in state $i$ converges to $\pi_i$ with probability $1$, regardless of the initial state. Intuitively, the chain wanders among the transient states until it eventually enters the closed communicating class; from then on it behaves as an irreducible chain on that reduced state space.
If there are two or more closed communicating classes, then there are multiple stationary distributions, and the time-average behavior depends on which closed communicating class the chain is eventually absorbed into (for a transient initial state, this can itself be random).
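To see the single-closed-class case concretely, here is a small sketch with a hypothetical 3-state chain (not from the question): state $0$ is transient and $\{1, 2\}$ is the unique closed communicating class, so power iteration of $\pi \mapsto \pi P$ should drive all mass into $\{1, 2\}$ and converge to the unique stationary distribution.

```python
# Hypothetical 3-state chain: state 0 is transient, {1, 2} is the
# unique closed communicating class.
P = [
    [0.5, 0.25, 0.25],  # from state 0: may linger, but leaks into {1, 2}
    [0.0, 0.6,  0.4],   # states 1 and 2 only talk to each other
    [0.0, 0.2,  0.8],
]

def step(pi, P):
    """One application of the map pi -> pi P."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

pi = [1/3, 1/3, 1/3]     # arbitrary starting distribution
for _ in range(200):     # power iteration converges here (chain is aperiodic)
    pi = step(pi, P)

print([round(p, 4) for p in pi])   # -> [0.0, 0.3333, 0.6667]
```

The limit $\pi = (0, 1/3, 2/3)$ puts zero mass on the transient state, matching the claim above; you can check directly that it satisfies $\pi = \pi P$.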
Counter-example for countably infinite state systems
This is not necessarily true for countably infinite state spaces, even if there exists a stationary distribution $\pi$. Consider the example state space $S = \{0, 1, 2, 3, ...\}$ with transitions:
\begin{align}
P_{00} &= 1\\
P_{i,i+1} &= 1-2^{-i} \quad \forall i \in \{1, 2, 3, ...\} \\
P_{i,0} &= 2^{-i} \quad \forall i \in \{1, 2, 3, ...\}
\end{align}
There is a single closed communicating class, consisting of the single state $\{0\}$ (all other states are transient). There is a single stationary distribution $\pi$ that solves $\pi = \pi P$, namely,
$$ \pi = (\pi_0, \pi_1, \pi_2, \pi_3, ...) = (1, 0, 0, 0, ...) $$
Further, it is possible to get to state 0 from any other state.
However, if we start in state $1$, the probability that we never visit state 0 is
$$ \prod_{i=1}^{\infty}(1-2^{-i}) > 0 $$
and so the fraction of time that we are in state $0$ does not converge to $\pi_0=1$ with probability 1.
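A quick numerical check confirms that this infinite product is bounded away from zero:

```python
# Numerically evaluate prod_{i=1}^inf (1 - 2^{-i}), the probability of
# never visiting state 0 when the chain starts in state 1.
prob = 1.0
for i in range(1, 60):          # terms beyond 2^{-60} are below float precision
    prob *= 1.0 - 2.0 ** (-i)

print(round(prob, 6))           # -> 0.288788
```

So with probability roughly $0.29$ the chain started at state $1$ drifts off to infinity and never reaches state $0$, even though $\pi_0 = 1$.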
First, recall the Markov property. It tells you the following for any natural number $n$ and any states $k, x_1, \dots, x_n$:
$$
P(X_{n+1} = k | X_1 = x_1,...,X_{n-1} = x_{n-1}, X_n = x_n) = P(X_{n+1} = k | X_n = x_n)
$$
That is, only "the present" determines the future: given the present state, the future is independent of the past.
To make use of this, since you have multiple $X_i$ on the left side of the $|$, you must break it up using the fact $$P(A \cap B | C) = \frac{P(A \cap B \cap C)}{P(C)} = \frac{P(A \cap B \cap C)}{P(B \cap C)}\frac{P(B \cap C)}{P(C)} = P(A | B,C)P(B|C)$$
(Note that $B,C$ is the same as $B \cap B$ intersected with $C$, i.e. $B \cap C$; the comma is just lighter notation on the right side of the $|$.)
So we start with $K = P(X_3 = 3 ,X_2= 1| X_1 = 2, X_0 = 2)$. Let us do this break up for $A \to X_3 = 3$ and $B \to X_2 = 1$ and $C \to X_1 = 2 , X_0 = 2$ :
$$
K = P(A\cap B| C) = P(A | B,C)P(B|C) \\
= \color{green}{P(X_3 = 3 | X_2 = 1 , X_1 = 2 , X_0 = 2)}\color{blue}{P(X_2 = 1 | X_1 = 2 , X_0 = 2)}
$$
Now, use the Markov property on the green expression. You get $\color{green}{P(X_3 = 3 | X_2 = 1)} = 0.3$, since this is the transition probability from state $1$ to state $3$.
Similarly, the blue expression is $\color{blue}{P(X_2 = 1 | X_1 = 2)} = 0.4$, since this is the transition probability from state $2$ to state $1$.
Their product is $0.12$, as desired.
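The same computation can be checked mechanically. The question only fixes two entries of the transition matrix, $P_{1,3} = 0.3$ and $P_{2,1} = 0.4$; the remaining entries below are made up just so each row sums to $1$, and they do not affect $K$:

```python
# Only P[1][3] = 0.3 and P[2][1] = 0.4 are given; the rest is filler
# chosen so that each row sums to 1.
P = {
    1: {1: 0.5, 2: 0.2, 3: 0.3},   # P[1][3] = 0.3 (given)
    2: {1: 0.4, 2: 0.3, 3: 0.3},   # P[2][1] = 0.4 (given)
    3: {1: 1/3, 2: 1/3, 3: 1/3},
}

# Direct use of the factorisation K = P(X3=3 | X2=1) * P(X2=1 | X1=2):
K = P[2][1] * P[1][3]

# Cross-check: enumerate every trajectory (x2, x3) starting from X1 = 2
# and keep only those with x2 = 1 and x3 = 3.
K_enum = sum(P[2][x2] * P[x2][x3]
             for x2 in P for x3 in P
             if x2 == 1 and x3 == 3)

print(round(K, 4), round(K_enum, 4))   # -> 0.12 0.12
```

Both routes give $K = 0.12$, as in the hand calculation above.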
I am not a specialist in invariant distributions, so I hope there will be other answers; this is just too much to write as a comment.
As Sasha wrote, there exists an invariant measure $\pi = [0.5,0.5]$, and you can easily check that it works for all $0\leq a,b\leq1$.
The chain will not converge to the stationary distribution, since it is periodic with period $2$. Although the chain does not converge to the stationary distribution, the distribution still exists: convergence is a sufficient condition for existence, not a necessary one. You may want to take a look at the notion of ergodicity here.
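A minimal sketch of this, assuming the two-state matrix $\begin{pmatrix} a & 1-a \\ b & 1-b \end{pmatrix}$ from the question with $a=0$, $b=1$ (the deterministic flip chain):

```python
# Period-2 flip chain: a = 0, b = 1 in the matrix [[a, 1-a], [b, 1-b]].
P = [[0.0, 1.0],
     [1.0, 0.0]]

# pi = [0.5, 0.5] is invariant: pi P = pi ...
pi = [0.5, 0.5]
piP = [pi[0] * P[0][j] + pi[1] * P[1][j] for j in range(2)]
print(piP)               # -> [0.5, 0.5]

# ... yet a point mass never settles down: it oscillates with period 2.
mu = [1.0, 0.0]
for n in range(4):
    mu = [mu[0] * P[0][j] + mu[1] * P[1][j] for j in range(2)]
    print(n + 1, mu)     # alternates [0.0, 1.0], [1.0, 0.0], ...
```

The stationary distribution exists (and time averages still equal $[0.5, 0.5]$), but the distribution of $X_n$ itself oscillates forever instead of converging.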
The chain will also depend on the initial condition: take the case $a=1, b=0$. Then you have two absorbing states, and wherever you start, you stay there forever.
Finally, chaos has no single strict definition. To be precise, that statement from the Hypertextbook is not a definition of chaos. You may say that the chain I described in 3. exhibits dependence on the initial data; likewise, you may consider non-ergodic chains chaotic, since they exhibit dependence on the initial distribution.