Solved – First-order Markov chain: state transition probabilities at each time are enough for the model

markov-process

Hi Markov chain specialists,

I hope you can answer a question about a trellis diagram that I saw in a book. In this picture of a general first-order Markov chain with 2 states, why do we need to know the probability of each state at each time?

A general first-order Markov chain can be time-dependent (non-stationary), so the transition probabilities can change over time; that is why the picture shows E(time) at each step. But given the initial probability of each state (P(t=1)) and the transition probabilities, I can calculate the probability of each state at any later time, so the state probabilities at each time (P(t=2), P(t=3), …) are redundant information. Am I right, or am I missing something?


Image: Statistics in Volcanology (Google Books), page 167, Markov chain trellis diagram.

Best Answer

[...] but given the initial probability of each state (P(t=1)) and the transition probabilities, I can calculate the probability of each state at any later time [...]

That sounds correct. You should be able to prove it by induction and conditioning. It has been a while since I've worked with Markov chains, so apologies in advance for any poor notation (this is also my first post ever :D ). The base case for the induction should look something like this.

Suppose that we know the initial probabilities as above. Then we have:

$P_1[2] = P(X_2=1|X_1=1)P(X_1=1)+P(X_2=1|X_1=2)P(X_1=2)= S_1\epsilon_{11}[1]+S_2\epsilon_{21}[1],$

which are all known quantities (here $S_i = P(X_1=i)$ is the initial probability of state $i$, and $\epsilon_{ji}[1] = P(X_2=i \mid X_1=j)$ is the transition probability at time 1).
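Spelled out in the same notation (extrapolating from the base case, so take the notation with a grain of salt), the general induction step for a 2-state chain would be

$$P_i[t+1] \;=\; \sum_{j=1}^{2} P_j[t]\,\epsilon_{ji}[t], \qquad i \in \{1,2\},$$

so the initial distribution $P(t=1)$ together with the time-indexed transition probabilities $\epsilon_{ji}[t]$ determines $P(t)$ for every later $t$.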

I think a simple induction argument should finish it from there.
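As a sanity check, here is a minimal numeric sketch of that propagation. The numbers and variable names are made up for illustration (they are not from the book); the point is only that the per-time state probabilities fall out of the initial distribution and the time-varying transition matrices.

```python
import numpy as np

# Hypothetical 2-state, time-inhomogeneous Markov chain. Entry (j, i)
# of E[t] is epsilon_{ji}[t] = P(X_{t+1}=i | X_t=j), so rows sum to 1.
E = [
    np.array([[0.9, 0.1],
              [0.2, 0.8]]),  # transitions between t=1 and t=2
    np.array([[0.6, 0.4],
              [0.5, 0.5]]),  # transitions between t=2 and t=3
]

p = np.array([0.7, 0.3])     # initial distribution: P(X_1=1), P(X_1=2)

# Induction step from the answer: P_i[t+1] = sum_j P_j[t] * epsilon_{ji}[t],
# which is just a row-vector / matrix product.
for t, Et in enumerate(E, start=1):
    p = p @ Et
    print(f"P(t={t + 1}) = {p}")
# P(t=2) = [0.69 0.31]
# P(t=3) = [0.569 0.431]
```

Each printed vector sums to 1, and nothing beyond `p` at t=1 and the `E[t]` matrices was needed, which is exactly the questioner's point.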