Markov Chains: Expected visit time when the visit probability is less than 1

Tags: expected-value, markov-chains, stochastic-processes

I'm rather new to Markov chains, and I was reading about them in the book "Randomized Algorithms" by Motwani and Raghavan when I stumbled on a somewhat confusing definition. I tried looking for similar questions, but they all seem to involve additional concepts, so I would like feedback on this book-specific case to assess whether my understanding is correct.

Specifically, I got confused by the following extract about the definition of $h_{ij}$, the expected number of steps needed to get from a state $i$ to a state $j$. The setting is that of discrete Markov chains with a finite or countable number of states, with an associated transition matrix $P$ (which is row-stochastic). I report the relevant extract of the text here.

[Image: extract from the book. It defines $r_{ij}^{(t)}$ as the probability that, starting at state $i$, the first transition into state $j$ occurs at time $t$; $f_{ij} = \sum_{t>0} r_{ij}^{(t)}$ as the probability that the chain ever visits $j$ when started at $i$; and $h_{ij} = \sum_{t>0} t \cdot r_{ij}^{(t)}$ as the expected number of steps to reach $j$ from $i$. The last sentence states that $h_{ij} = \infty$ whenever $f_{ij} < 1$.]

As you can see, $h_{ij}$ follows the standard definition of an expected value, but I am not able to fully grasp the last sentence. To me it reads as a logical deduction the author draws from the definition of $h_{ij}$, rather than as a special case added to the definition (though I may be wrong, since English is not my first language), and as a deduction it does not seem to hold.

Intuitively, it makes sense that whenever I can reach a region from which it is impossible to get back to state $j$, the expected time to reach $j$ should grow indefinitely. However, looking at the formula for $h_{ij}$, I could rather easily come up with an example that seems to falsify the authors' claim. The example is the following graph, where we assume uniform probability on the outgoing edges (i.e. $1/2$ for every edge except the self-loop).
[Image: a small graph in which state $i$ has two outgoing edges, one to state $j$ and one to a third state with a self-loop (an absorbing state), each outgoing edge of $i$ taken with probability $1/2$.]
In this case it appears to me that $f_{ij}$ would be equal to $1/2$, since I reach $j$ only half of the time when starting from $i$, and hence $f_{ij} < 1$; yet $h_{ij} = 1/2 \neq \infty$, because only $r_{ij}^{(1)} = 1/2 > 0$, while $r_{ij}^{(t)} = 0$ for all $t > 1$, since it is impossible to end up in $j$ for the first time in exactly $t > 1$ steps when starting from $i$. Here the issue is that $r_{ij}^{(t)}$ becomes exactly $0$ for all $t > t_0$ for some $t_0$, cutting off any further positive contributions to $h_{ij}$. But even when there are infinitely many paths from $i$ to $j$ (though not all paths, since $f_{ij} < 1$), I guess we may still end up with a convergent sum for $h_{ij}$, which would again contradict the claim.
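To double-check the computation I also ran a quick Monte Carlo sketch in Python (the graph topology and the name $k$ for the absorbing state are just my encoding of the figure above):

```python
import random

# Sketch of the chain in the figure: from i we move to j or to an
# absorbing state k with probability 1/2 each; k loops back to itself.
def first_passage_time(max_steps=100):
    """Step at which j is first reached from i, or None if the walk
    is absorbed in k and can therefore never reach j."""
    state = "i"
    for t in range(1, max_steps + 1):
        if state == "i":
            state = "j" if random.random() < 0.5 else "k"
        elif state == "k":
            return None  # stuck in the self-loop forever
        if state == "j":
            return t
    return None

trials = 100_000
times = [first_passage_time() for _ in range(trials)]
finite = [t for t in times if t is not None]

print("estimated f_ij  :", len(finite) / trials)   # ~ 0.5
print("sum_t t*r_ij^(t):", sum(finite) / trials)   # ~ 0.5, not infinity
```

The second printed value is the empirical $\sum_{t>0} t \cdot r_{ij}^{(t)}$, and it settles near $1/2$ instead of diverging, which is exactly what puzzles me.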

So my doubt is mainly the following: is the book's definition somewhat imprecise, so that the claim $f_{ij} < 1 \implies h_{ij} = \infty$ is more of a convention that does not follow from the summation formula, or am I getting something wrong in my reasoning?
Any hint would be much appreciated.

Best Answer

As you correctly pointed out, $f_{ij}$ is defined as an infinite sum over the first passage/return probabilities $r_{ij}^{(t)}$, and there may be transitions that trap you in a region from which you can never return to your desired state $j$. If you get stuck in such a loop, you never reach $j$ from $i$ in any finite number of steps, and $h_{ij}$ is the expectation of exactly that number of steps.
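To spell out the step that resolves your example (this is the standard reading, though the extract does not state it explicitly): let $T_{ij}$ be the first-passage time from $i$ to $j$, so that $r_{ij}^{(t)} = \Pr[T_{ij} = t]$. When $f_{ij} < 1$, the event $T_{ij} = \infty$ has probability $1 - f_{ij} > 0$, and the expectation of a nonnegative random variable that is infinite with positive probability is itself infinite:
$$h_{ij} \;=\; \mathbb{E}[T_{ij}] \;=\; \sum_{t>0} t \cdot r_{ij}^{(t)} \;+\; \infty \cdot (1 - f_{ij}) \;=\; \infty.$$
So in your example the convergent sum $\sum_{t>0} t \cdot r_{ij}^{(t)} = 1/2$ is not $\mathbb{E}[T_{ij}]$; it ignores the probability mass sitting at $t = \infty$. The formula $h_{ij} = \sum_{t>0} t \cdot r_{ij}^{(t)}$ computes the expectation only when $f_{ij} = 1$, and the book's last sentence covers the remaining case as a consequence of the definition of expectation rather than an arbitrary convention.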
