Determining the stable (long-term) state probabilities
The stationary state probabilities are the solutions of the following system of equations:
$$[\pi_A\ \pi_B\ \pi_C]
\begin{bmatrix}
0.2 & 0.4 & 0.4 \\
0.4 & 0.3 & 0.3 \\
0.6 & 0.2 & 0.2 \\
\end{bmatrix}
=[\pi_A\ \pi_B\ \pi_C]
$$
and
$$ \pi_A+\pi_B+\pi_C=1 .$$
Note
Here $\pi_A\not=1$. $\pi_A$ is a hypothetical quantity; an intuitive definition of it (together with those of $\pi_B$ and $\pi_C$) is as follows: "If we started the system by choosing a state at random with probabilities $\pi_A,\pi_B,\pi_C$, then the probability that the next state is $A$, $B$, or $C$ would again be $\pi_A,\ \pi_B,\ \pi_C$, respectively." Such stationary probabilities do not necessarily exist. You can read about the details here.
Continued
Having performed the vector-matrix multiplication, we arrive at a redundant system of equations. After omitting the redundant equation and rearranging, we have
$$\begin{matrix}
-0.8\pi_A & +& 0.4\pi_B & +& 0.6\pi_C &=0 \\
\ \ 0.4\pi_A & -& 0.7\pi_B& +& 0.2\pi_C&=0 \\
\pi_A&+&\pi_B&+& \pi_C&=1&
\end{matrix}$$
The solution, with the help of Alpha, is $$\pi_A=\frac{5}{13}, \ \pi_B=\frac{4}{13},\ \pi_C=\frac{4}{13} .$$
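As a cross-check of the Alpha result, the system can also be solved exactly in Python with the standard-library `fractions` module (plain Gaussian elimination, no external packages):

```python
from fractions import Fraction as F

# Augmented matrix of the system above:
#   -0.8*piA + 0.4*piB + 0.6*piC = 0
#    0.4*piA - 0.7*piB + 0.2*piC = 0
#        piA +     piB +     piC = 1
A = [
    [F(-8, 10), F(4, 10), F(6, 10), F(0)],
    [F(4, 10), F(-7, 10), F(2, 10), F(0)],
    [F(1), F(1), F(1), F(1)],
]

n = 3
for i in range(n):
    # pivot: pick a row with a nonzero entry in column i
    p = next(r for r in range(i, n) if A[r][i] != 0)
    A[i], A[p] = A[p], A[i]
    for r in range(i + 1, n):
        f = A[r][i] / A[i][i]
        A[r] = [a - f * b for a, b in zip(A[r], A[i])]

# back-substitution
pi = [F(0)] * n
for i in reversed(range(n)):
    s = sum(A[i][j] * pi[j] for j in range(i + 1, n))
    pi[i] = (A[i][n] - s) / A[i][i]

print(pi)  # [Fraction(5, 13), Fraction(4, 13), Fraction(4, 13)]
```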
Question a.
We can toss heads in each state, so
$$P(\text{heads})=P(\text{heads}\mid\text{in state }A)\pi_A+P(\text{heads}\mid\text{in state }B)\pi_B+P(\text{heads}\mid\text{in state }C)\pi_C=\frac{2}{10}\cdot\frac{5}{13}+\frac{4}{10}\cdot\frac{4}{13}+\frac{6}{10}\cdot\frac{4}{13}=\frac{5}{13}=\pi_A.$$
That is, in the long run the result of the toss is a head $\frac{5}{13}\cdot 100\%$ of the time.
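The arithmetic in the mixture above can be checked exactly, for instance:

```python
from fractions import Fraction as F

# P(heads): per-state head probabilities weighted by the stationary probabilities
p_heads = F(2, 10) * F(5, 13) + F(4, 10) * F(4, 13) + F(6, 10) * F(4, 13)
print(p_heads)  # 5/13
```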
Remark
It seemed accidental that
$$P(\text{heads})=\pi_A.$$
I wondered whether it was. So I took $p_A$, $p_B$, and $p_C$ to be the probabilities of heads in states $A$, $B$, and $C$, respectively.
In order to get the stationary probabilities we have to solve the following system of equations:
$$\begin{matrix}
p_A\pi_A & +& p_B\pi_B & +& p_C\pi_C &=\pi_A \\
\frac{1-p_A}{2}\pi_A & +& \frac{1-p_B}{2}\pi_B& +& \frac{1-p_C}{2}\pi_C&=\pi_B \\
\pi_A&+&\pi_B&+& \pi_C&=1&
\end{matrix}.$$
The solutions are
$$
\begin{matrix}
\pi_A=\frac{p_B+p_C}{-2p_A+p_B+p_C+2}\\
\pi_B=\frac{1-p_A}{-2p_A+p_B+p_C+2}\\
\pi_C=\frac{1-p_A}{-2p_A+p_B+p_C+2}
\end{matrix}.
$$
At this point, note that $p_A<1$. If $p_A$ were $1$, then the Markov chain would never get out of state $A$. Then, of course,
$$1=P(\text{heads})=\pi_A.$$
If $p_A<1$ then the solutions above are valid and we may compute the probability of heads in general. Watch this:
$$P(\text{heads})=p_A\pi_A+p_B\pi_B+p_C\pi_C=\frac{(p_B+p_C)p_A+(1-p_A)p_B+(1-p_A)p_C}{-2p_A+p_B+p_C+2}=\frac{p_B+p_C}{-2p_A+p_B+p_C+2}=\pi_A \ !$$
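The identity can also be spot-checked with exact rationals on a grid of values with $p_A<1$ (the function name `stationary` is mine, not from the text):

```python
from fractions import Fraction as F
from itertools import product

def stationary(pA, pB, pC):
    # closed-form solution from above; valid whenever the denominator is nonzero
    d = -2 * pA + pB + pC + 2
    return (pB + pC) / d, (1 - pA) / d, (1 - pA) / d

grid = [F(0), F(1, 4), F(1, 2), F(3, 4)]        # all < 1, so d > 0
for pA, pB, pC in product(grid, repeat=3):
    piA, piB, piC = stationary(pA, pB, pC)
    assert piA + piB + piC == 1
    assert pA * piA + pB * piB + pC * piC == piA  # P(heads) = pi_A
print("P(heads) = pi_A on the whole grid")
```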
Perhaps this is what the OP wanted to see.
Question b.
We have to go back to the initial state, which is $A$. Now, the following vector-matrix product gives the probabilities that we end up in state $A$, $B$, or $C$ at the $n^{\text{th}}$ minute:
$$[1\ 0\ 0]
\begin{bmatrix}
0.2 & 0.4 & 0.4 \\
0.4 & 0.3 & 0.3 \\
0.6 & 0.2 & 0.2 \\
\end{bmatrix}^n
=[\pi_A^{(n)}\ \pi_B^{(n)}\ \pi_C^{(n)}]
$$
Finally, we repeat the steps performed in solution a., but this time with the results of the vector-matrix product above, taking $n=3$ and $n=10$.
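A sketch of this computation in pure Python (`dist_after` is my name for the product $[1\ 0\ 0]P^n$):

```python
# Transition matrix from the text; start distribution [1, 0, 0] (state A).
P = [[0.2, 0.4, 0.4],
     [0.4, 0.3, 0.3],
     [0.6, 0.2, 0.2]]

def dist_after(n):
    """Distribution over (A, B, C) after n one-minute steps, starting in A."""
    v = [1.0, 0.0, 0.0]
    for _ in range(n):
        # one vector-matrix product v @ P
        v = [sum(v[i] * P[i][j] for i in range(3)) for j in range(3)]
    return v

for n in (3, 10):
    v = dist_after(n)
    # repeat the mixture from solution a. with the n-step distribution
    p_heads = 0.2 * v[0] + 0.4 * v[1] + 0.6 * v[2]
    print(n, v, p_heads)
```

For large $n$ the distribution converges to $\left(\frac{5}{13},\frac{4}{13},\frac{4}{13}\right)$, matching the stationary probabilities found above.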
Your first thought here might be that if $X_n$ is to be modelled as a Markov chain, then $X_n$ depends on $X_{n-1}$ only, so we have four states, since $X_{j}$ can be any one of $\{0,1,2,3\}$. However, the problem is that we don't quite have this situation: $X_n$ does not depend only on $X_{n-1}$, but also on $n$, since if $n$ is odd we take $X_n$ as the number of heads, and otherwise as the number of tails.
Therefore, $X_n$ depends only on $X_{n-1}$, but the manner of dependence (i.e. the probabilities $P(X_n = j \mid X_{n-1} = i)$ for $i,j \in \{0,1,2,3\}$) is non-homogeneous, since it depends on whether $n$ is odd or even, right?
We want our Markov chain to be homogeneous (i.e. the transition matrix needs to be $n$-independent, which we saw is not the case), which is why we need to add further states. Furthermore, we know what needs to happen: if we just specify whether $X_n$ is to count heads or tails, then we know the distribution of $X_n$ given $X_{n-1}$. Since this information is not evident from $X_{n-1}$ itself, we need to create states which record whether $n$ is odd or even. Then, while finding $X_n$, the state at time $n-1$ will also contain the information about whether $n$ is odd or even, based on which we can find $X_n$ after the $n$th round of tossing.
Therefore, the idea is to create states so that each state contains two pieces of information: the first has four possibilities, $0,1,2,3$; the second is just "count heads next round" or "count tails next round". This gives rise to $8$ states.
Here are examples of states:
($3$, "count heads next round")
($1$, "count tails next round")
($2$, "count heads next round")
Now, for the transition matrix. Unfortunately, there are plenty of possible transitions. Let me take you through a few examples, so that you can work out the rest on your own.
Compute the transition from ($0$, "count tails next round") to ($3$, "count tails next round"): I claim this is zero. Why? Note that we count tails and heads alternately, so if we count tails in one round, in the next round we have to count heads. Therefore this transition probability is zero, since we cannot be counting tails in consecutive rounds.
The transition from ($2$, "count heads next round") to ($1$, "count heads next round") is also zero, for similar reasons.
The transition from ($1$, "count tails next round") to ($3$, "count heads next round"): By the description of the experiment, if we are counting tails in the next round, then we counted heads in this round. So, this time while tossing the coins, we got $1$ head and two tails. Now, the two tails get counted for the next round, since we are counting tails, and we have to flip the coin which came up heads, according to the experiment. The probability of getting $3$ tails is then the probability that this coin comes up tails, which is $\frac 12$. So the transition probability is $\frac 12$.
The transition from ($2$, "count heads next round") to ($0$, "count tails next round"): By the description of the experiment, if we are counting heads in the next round, then we counted tails in this round. So, this time while tossing the coins, we got $2$ tails and one head. Now, the one head gets counted for the next round, according to the experiment. But then we already have more than $0$ heads from carrying over the previous head, so it is not possible to get $0$ heads after the next round! So the probability is zero.
The transition from ($2$, "count heads next round") to ($2$, "count tails next round"): By the description of the experiment, if we are counting heads in the next round, then we counted tails in this round. So, this time while tossing the coins, we got two tails and one head. Now, the one head gets counted for the next round, since we are counting heads, and we have to flip the two coins which came up tails, according to the experiment. The probability of getting $2$ heads is then the probability that exactly one of the two coins comes up heads, which is $\frac 12$. So the transition probability is $\frac 12$.
Try to compute the following and get back:
Transition from ($0$,"count heads next round") to ($2$,"count heads next round")
Transition from ($1$, "count tails next round") to ($3$, "count heads next round")
Transition from ($3$, "count heads next round") to ($1$, "count tails next round")
If you get these right, then I will say you have got the hang of it.
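If you want to check your answers, here is a sketch that tabulates all of these transition probabilities. It assumes the rules used in the worked examples above: from a state $(c,\cdot)$, the $3-c$ coins not counted this round carry over into the next round's count, the $c$ counted coins are re-flipped (all coins fair), and the counting mode alternates every round. The function name `p` and the state encoding are mine.

```python
from fractions import Fraction as F
from math import comb

H, T = "count heads next round", "count tails next round"
STATES = [(c, m) for c in range(4) for m in (H, T)]

def p(src, dst):
    """Transition probability between two of the eight states."""
    c, mode = src
    d, next_mode = dst
    if next_mode == mode:
        return F(0)                  # the counting mode must alternate
    k = d - (3 - c)                  # re-flips that must land on the counted face
    if not 0 <= k <= c:
        return F(0)
    return F(comb(c, k), 2 ** c)     # Binomial(c, 1/2) pmf at k

print(p((1, T), (3, H)))             # 1/2, as computed above
print(p((2, H), (0, T)))             # 0, as computed above
print(p((2, H), (2, T)))             # 1/2, as computed above
```

Summing `p(src, dst)` over all `dst` gives $1$ for every `src`, as it should.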
Best Answer
As I wrote above, there are eight states, which can be represented in this graph:
The numbers in each vertex describe the number of heads, then tails and whether the heads or the tails should be counted for the next transition. (Be sure the transitions away from any vertex sum to 1.0.)
It is a simple matter to label each edge by the probabilities of transition (head counting to tail counting or vice versa).