Another thing you can do, which makes the procedure completely general - in addition to the four equations you get from
$$v(P-I) = 0$$
Where $P$ is your transition probabilities matrix above, you also need -
$$v_1 + v_2 +v_3 + \dots = 1$$
So, just add a column of all ones to your $P-I$ matrix. This new matrix has $n$ rows and $n+1$ columns; let's call it $Q$. Your right-hand-side vector becomes $b = [0, 0, \dots, 0, 1]$, i.e. $n$ zeros followed by a 1. Now you get -
$$vQ = b$$
Post multiply both sides by $Q^T$
$$vQQ^T = bQ^T$$
Which leads to -
$$v = bQ^T(QQ^T)^{-1}$$
Here, $v$ is a row vector. Most linear algebra solvers expect the system in the form $Ax = b$ with $x$ a column vector, which is not the form of the result above. To convert it into that format (so it can be easily passed to these solvers), simply take the transpose, noting that $QQ^T$ is symmetric:
$$v^T = (QQ^T)^{-1}(Qb^T)$$
However, note that $Qb^T$ is simply a vector of ones, since all but the last entry of $b$ are zero and the last column of $Q$ is all ones. So there is no need to explicitly calculate $Qb^T$; it is simply a vector of ones. Here is a Python method that does this for you:
import numpy as np

def steady_state_prop(
        p=np.array([
            [0, .2, .4, .4],
            [.2, 0, .4, .4],
            [.2, .3, 0, .5],
            [.3, .4, .3, 0],
        ])):
    dim = p.shape[0]
    q = p - np.eye(dim)        # P - I, shape (n, n)
    ones = np.ones(dim)
    q = np.c_[q, ones]         # Q = [P - I | 1], shape (n, n+1)
    QQT = np.dot(q, q.T)       # Q Q^T, shape (n, n)
    bQT = np.ones(dim)         # Q b^T is simply a vector of ones
    return np.linalg.solve(QQT, bQT)
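As a quick sanity check, here is a standalone sketch of the same computation on the example matrix; the result sums to 1 and satisfies $vP = v$ (the variable names are my own):

```python
import numpy as np

# Standalone check of the least-squares formulation on the example chain.
P = np.array([
    [0, .2, .4, .4],
    [.2, 0, .4, .4],
    [.2, .3, 0, .5],
    [.3, .4, .3, 0],
])
n = P.shape[0]
Q = np.c_[P - np.eye(n), np.ones(n)]      # Q = [P - I | 1]
v = np.linalg.solve(Q @ Q.T, np.ones(n))  # solves (Q Q^T) v^T = Q b^T = 1

# v is the stationary distribution: v @ P equals v and v.sum() equals 1
```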
As I understand it, you are asked to find the transition probabilities of the reversed Markov chain.
Let us denote the Markov chain that you have by $(X_t, t\ge0)$. Its transition probability matrix is $B=(b_{ij})$ i.e. $b_{ij}=P(X_t=j|X_{t-1}=i)$ for any $t$.
You are also given the steady state distribution $\vec \pi=(\pi_1,\pi_2,...)$ of the chain $(X_t, t\ge0)$.
The Markov property is stated as “the future is independent of the past given the present state”. It can be restated as “the past is independent of the future given the present state”. But this means that the process in reverse time is still a Markov chain. So for your Markov chain $(X_t, t\ge0)$ there exists a reversed version, say $(X^*_t, t\ge0)$, which goes back in time.
This argument can be made clearer and more rigorous, but this is not the place to do it (check the chapters on reversible Markov chains in the classic books on queueing theory and Markov chains, for example Stochastic Modeling and the Theory of Queues by Ronald W. Wolff).
Denote the transition probabilities of $(X^*_t, t\ge0)$ by $b^*_{ij}$. So what you are asking, is how to compute $b^*_{ij}$.
The theory says that $b^*_{ij}$ can be exactly computed in terms of $b_{ij}$ and $\pi$ in the following way:
$$
b^*_{ij}={\pi_j \over \pi_i}b_{ji}.
$$
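As an illustration, here is a minimal NumPy sketch of this formula; the function name `reversed_chain` and the two-state example are my own, not from the question:

```python
import numpy as np

def reversed_chain(B, pi):
    # b*_ij = (pi_j / pi_i) * b_ji, computed for all i, j at once
    B = np.asarray(B, dtype=float)
    pi = np.asarray(pi, dtype=float)
    return (B.T * pi) / pi[:, None]

# Two-state example: the stationary distribution of B is (5/6, 1/6)
B = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = np.array([5/6, 1/6])
B_star = reversed_chain(B, pi)
# Each row of B_star sums to 1, and B_star has the same stationary distribution
```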
Best Answer
In the steady state, the vector of probabilities multiplied by the transition matrix has to result in the same vector of probabilities. Therefore, you have to look for the (left) eigenvector of the matrix associated with eigenvalue 1.
A brute-force solution is to use Excel or pen and paper and multiply some assumed initial vector of probabilities by the matrix. If you repeat that a number of times, feeding each resulting vector back in as the new input, you converge to the steady-state vector.
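That brute-force iteration is easy to script as well; here is a minimal sketch (NumPy instead of Excel; the example matrix and the iteration count are my own choices):

```python
import numpy as np

P = np.array([
    [0, .2, .4, .4],
    [.2, 0, .4, .4],
    [.2, .3, 0, .5],
    [.3, .4, .3, 0],
])

v = np.full(4, 0.25)   # any initial probability vector works
for _ in range(100):
    v = v @ P          # repeatedly multiply by the transition matrix

# v has converged to the steady-state vector, so v @ P is (numerically) v again
```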
A related question/discussion is here.