Markov Process – Dependence on Present and Past States

markov-process

I would just like someone to confirm my understanding, or tell me if I'm missing something.

The definition of a Markov process says that the next step depends only on the current state and not on any past states. So, let's say we had a state space of a, b, c, d and we go from a -> b -> c -> d. That means that the transition to d could depend only on the fact that we were in c.

However, is it true that you could just make the model more complex and kind of "get around" this limitation? In other words, if your state space were now aa, ab, ac, ad, ba, bb, bc, bd, ca, cb, cc, cd, da, db, dc, dd, meaning that each new state combines the previous state with the current state, then the above trajectory would become *a -> ab -> bc -> cd (where * stands for the unknown predecessor of the initial state), and so the transition to cd (equivalent in the previous model to d) now "depends" on a state which, if modeled differently, would be a previous state (I refer to it as a sub-state below).

Am I correct that one can make the process "depend on previous states (sub-states)" (I know that technically it doesn't in the new model, since a sub-state is no longer a real state) and still maintain the Markov property by expanding the state space as I did? So one could, in effect, create a Markov process that depends on any number of previous sub-states.
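For concreteness, here is a minimal Python sketch of the relabeling I have in mind (the helper name is made up for illustration):

```python
# Relabel a trajectory over {a, b, c, d} as a trajectory over pair states.
# The initial state has no predecessor, so it is marked with '*'.
def to_pair_states(path):
    pairs, prev = [], '*'
    for state in path:
        pairs.append(prev + state)
        prev = state
    return pairs

print(to_pair_states(['a', 'b', 'c', 'd']))  # ['*a', 'ab', 'bc', 'cd']
```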

Best Answer

Technically, both of the processes you describe are Markov chains. The difference is that the first one is a first order Markov chain whereas the second one is a second order Markov chain. And yes, you can transform a second order Markov chain into a first order Markov chain by a suitable change in the state space definition. Let me explain via an example.

Suppose that we want to model the weather as a stochastic process, and suppose that on any given day the weather can be rainy, sunny or cloudy. Let $W_t$ be the weather on day $t$, and let us denote the possible states by the symbols $R$ (for rainy), $S$ (for sunny) and $C$ (for cloudy).

First Order Markov Chain

$P(W_t = w \mid W_{t-1}, W_{t-2}, W_{t-3}, \ldots) = P(W_t = w \mid W_{t-1})$

Second Order Markov Chain

$P(W_t = w \mid W_{t-1}, W_{t-2}, W_{t-3}, \ldots) = P(W_t = w \mid W_{t-1}, W_{t-2})$
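To make this concrete, here is a minimal Python sketch of a second order chain; the transition probabilities below are invented purely for illustration:

```python
import random

STATES = ['R', 'S', 'C']  # rainy, sunny, cloudy

# Hypothetical second-order transition probabilities P(W_t | W_{t-2}, W_{t-1}),
# one row per (weather two days ago, weather yesterday), ordered as STATES.
# The numbers are made up; each row sums to 1.
P2 = {
    ('R', 'R'): [0.6, 0.2, 0.2],
    ('R', 'S'): [0.3, 0.5, 0.2],
    ('R', 'C'): [0.4, 0.2, 0.4],
    ('S', 'R'): [0.5, 0.3, 0.2],
    ('S', 'S'): [0.1, 0.7, 0.2],
    ('S', 'C'): [0.2, 0.4, 0.4],
    ('C', 'R'): [0.5, 0.2, 0.3],
    ('C', 'S'): [0.2, 0.5, 0.3],
    ('C', 'C'): [0.3, 0.2, 0.5],
}

def step_second_order(w_prev2, w_prev1):
    """Draw today's weather given the weather on the previous two days."""
    return random.choices(STATES, weights=P2[(w_prev2, w_prev1)])[0]

# Example: sample five more days, given the weather on days 1 and 2.
history = ['S', 'C']
for _ in range(5):
    history.append(step_second_order(history[-2], history[-1]))
print(history)
```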

The second order Markov chain can be transformed into a first order Markov chain by re-defining the state space as follows. Define:

$Z_{t-1,t} = (W_{t-1}, W_t)$, the weather on two consecutive days.

In other words, the state can take one of the following values: $RR$, $RC$, $RS$, $CR$, $CC$, $CS$, $SR$, $SC$ and $SS$. With this re-defined state space we have the following:

$P(Z_{t-1,t} = z_{t-1,t} \mid Z_{t-2,t-1}, Z_{t-3,t-2}, \ldots) = P(Z_{t-1,t} = z_{t-1,t} \mid Z_{t-2,t-1})$

The above is clearly a first order Markov chain on the re-defined state space. The one difference from the second order Markov chain is that the redefined chain must be specified with two initial states, i.e., it must be started with some assumption about the weather on both day 1 and day 2.
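Continuing the sketch above (this reuses STATES, P2 and step_second_order), the change of state space looks as follows in code. Note that a pair state $(w_1, w_2)$ can only move to pairs whose first component is $w_2$, and that the chain needs the weather on both day 1 and day 2 before it can take a single step:

```python
import itertools

# All nine pair states (W_{t-1}, W_t) of the re-defined chain.
PAIR_STATES = list(itertools.product(STATES, repeat=2))

# First-order transition probabilities over pair states: (w1, w2) can only
# move to (w2, w3), with probability P(w3 | w1, w2); everything else is 0.
P1 = {(z, z_next): 0.0 for z in PAIR_STATES for z_next in PAIR_STATES}
for (w1, w2) in PAIR_STATES:
    for w3, p in zip(STATES, P2[(w1, w2)]):
        P1[((w1, w2), (w2, w3))] = p

# Every row still sums to 1, so this is a valid first order chain.
for z in PAIR_STATES:
    assert abs(sum(P1[(z, z_next)] for z_next in PAIR_STATES) - 1.0) < 1e-9

# Simulating it requires TWO initial days to form the starting pair state.
z = ('S', 'C')  # assumption: day 1 was sunny, day 2 was cloudy
for _ in range(5):
    z = (z[1], step_second_order(*z))
    print(z)
```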