Why is this stochastic process a Markov chain?

conditional-probability, markov-chains, probability, probability-distributions, stochastic-processes

Suppose we have a sequence $(X_n)$ of independent random variables such that $P(X_n=1)=P(X_n=0)=\frac{1}{2}$.

Let $M_n=(X_{n-1},X_n)$.
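
For concreteness, here is a minimal simulation sketch of this setup (Python is my own choice here; the seed and the length `N` are arbitrary):

```python
import random

# A minimal simulation sketch of the setup: X_1, ..., X_N are fair
# coin flips and M_n = (X_{n-1}, X_n) is the sliding window of length 2.
# The seed and the length N are arbitrary choices.
random.seed(0)
N = 10
X = [random.randint(0, 1) for _ in range(N)]     # X[0] plays the role of X_1
M = [(X[n - 1], X[n]) for n in range(1, N)]      # M_2, ..., M_N
print(X)
print(M)   # consecutive pairs overlap in exactly one coordinate
```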

The exercise I'm working on states that $(M_n)_{n \ge 2}$ is a Markov chain, but that didn't make sense to me, because if we have, for example,

$M_4=(X_3,X_4)$, $M_3=(X_2,X_3)$, $M_2=(X_1,X_2)$,

then for $(M_n)$ to be a Markov chain we would need, in particular,

$P(M_4=m_4 | M_3=m_3,M_2=(1,1))=P(M_4=m_4 | M_3=m_3)$,

but that didn't make sense to me, since the value of $M_3$ also depends on $M_2$: they share $X_2$. In other words, if we changed $M_2$ from $(1,1)$ to $(1,0)$, i.e. changed $X_2$, then wouldn't $M_3$ change as well? What am I getting wrong exactly?
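
To test this concretely, here is a Monte Carlo sketch of exactly this check (Python again; the seed, the sample size, and the example value $m_3=(1,0)$ are arbitrary choices of mine):

```python
import random
from collections import Counter

# A Monte Carlo sketch of the check above: estimate
# P(M_4 = m4 | M_3 = m3, M_2 = (1,1)) and P(M_4 = m4 | M_3 = m3)
# and compare them.  The seed, the sample size, and the example
# value m3 = (1, 0) are arbitrary choices.
random.seed(1)
trials = 200_000
m3 = (1, 0)              # consistent with M_2 = (1, 1), i.e. X_2 = 1
given_both = Counter()   # M_4 counts when M_3 = m3 and M_2 = (1, 1)
given_m3 = Counter()     # M_4 counts when M_3 = m3 alone

for _ in range(trials):
    x1, x2, x3, x4 = (random.randint(0, 1) for _ in range(4))
    M2, M3, M4 = (x1, x2), (x2, x3), (x3, x4)
    if M3 == m3:
        given_m3[M4] += 1
        if M2 == (1, 1):
            given_both[M4] += 1

n_both, n_m3 = sum(given_both.values()), sum(given_m3.values())
for m4 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(m4, round(given_both[m4] / n_both, 3), round(given_m3[m4] / n_m3, 3))
# Both conditional distributions agree up to sampling noise:
# once M_3 is known, conditioning on M_2 adds nothing.
```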

Edit:

Other than that problem, I was asked to give and to graph the transition matrix $P$. How am I going to do that, given that the state space is $E=\{(0,0),(0,1),(1,0),(1,1)\}$?

Best Answer

Your uncertainty about the shared coordinate of $M_3$ and $M_2$ is understandable: they share $X_2$ and can in no way be regarded as independent. But the Markov property does not require $M_3$ and $M_2$ to be independent; it only requires that, once $M_3$ is given as a condition, $M_2$ carries no further information about $M_4$. Note that both $M_3$ and $M_2$ appear as conditions in $P(M_4=m_4 | M_3=m_3,M_2=(1,1))=P(M_4=m_4 | M_3=m_3)$, and once $M_3$ is fixed, the dependence on $M_2$ becomes irrelevant.

Write $M_4=(a,b)$, $M_3=(c,d)$, $M_2=(e,f)$. Consistency of the shared coordinates forces $a=d$ (both equal $X_3$) and $c=f$ (both equal $X_2$), so
$$
P\big[M_4=(a,b) \,|\, M_3=(c,d),M_2=(e,f)\big]
=\begin{cases}
P\big[M_4=(d,b) \,|\, M_3=(c,d),M_2=(e,c)\big], &\quad a=d \ \text{ and } \ c=f,\\
0, &\quad \text{otherwise},
\end{cases}
$$
$$
=\begin{cases}
P\big[M_4=(d,b) \,|\, M_3=(c,d)\big], &\quad a=d \ \text{ and } \ c=f,\\
0, &\quad \text{otherwise}.
\end{cases}
$$
The last step is where independence enters: given $M_3=(c,d)$ we already know $X_3=d$, and $M_4=(d,X_4)$ with $X_4$ independent of $(X_1,X_2,X_3)$, so the additional knowledge of $M_2$ (equivalently, of $X_1$) changes nothing; in the nonzero case both sides equal $P(X_4=b)=\frac{1}{2}$.

If the transition matrix of $(M_n)$ is requested, you will end up with a $4\times 4$ matrix, since each $M_n$ takes $4$ different values. From state $(u,v)$ the chain moves to $(v,0)$ or $(v,1)$, each with probability $\frac{1}{2}$. If we order the states as $0:(0,0),\ 1:(0,1),\ 2:(1,0),\ 3:(1,1)$, then
$$
P=\begin{bmatrix}
\frac{1}{2}&\frac{1}{2}&0&0\\
0&0&\frac{1}{2}&\frac{1}{2}\\
\frac{1}{2}&\frac{1}{2}&0&0\\
0&0&\frac{1}{2}&\frac{1}{2}
\end{bmatrix}.
$$
To graph the chain, draw the four states as nodes and, for each nonzero entry $P_{ij}$, an arrow from state $i$ to state $j$ labelled $\frac{1}{2}$.
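
As a sanity check, here is a short sketch that rebuilds this matrix by direct enumeration (Python; the names `states`, `index`, and `P` are my own):

```python
from fractions import Fraction

# A sketch that rebuilds the 4x4 transition matrix by direct enumeration,
# using the convention M_n = (X_{n-1}, X_n) and the state order
# 0:(0,0), 1:(0,1), 2:(1,0), 3:(1,1) from the answer above.
states = [(0, 0), (0, 1), (1, 0), (1, 1)]
index = {s: i for i, s in enumerate(states)}

P = [[Fraction(0)] * 4 for _ in range(4)]
for (u, v) in states:        # current state (X_{n-1}, X_n) = (u, v)
    for w in (0, 1):         # next coin flip X_{n+1} = w, each with prob 1/2
        P[index[(u, v)]][index[(v, w)]] += Fraction(1, 2)

for row in P:
    print([str(entry) for entry in row])
# Each row has two entries of 1/2: from (u, v) only states whose first
# coordinate is v are reachable, which also describes the transition
# graph (an arrow labelled 1/2 for every nonzero entry).
```

Each printed row has exactly two entries equal to $\frac{1}{2}$, matching the matrix above.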
