Stochastic Processes – Concept of Random Walk

stochastic-processes

Reading the Wikipedia page for Random Walk, I was wondering what the definition of a general random walk as a random process is, so that all the concepts such as a random walk on $\mathbb{Z}^d$, a Gaussian random walk, and a random walk on a graph fall under this general definition.

(1) In Ross's book on stochastic processes, he defines a random walk to be the sequence of partial sums of a sequence of i.i.d. real-valued random variables.

So is a Gaussian random walk the sequence of partial sums of a sequence of i.i.d. real-valued random variables with a common Gaussian distribution?

How does a random walk on a graph fit into Ross's definition?

(2) I was thinking that maybe the concept of a random walk is equivalent to that of a discrete-time Markov process, where the state space can be continuous, discrete, or even non-numeric (as for a random walk on a graph). Is this correct?

So can I say the graph in "a random walk on a graph" is just the graphical model for the distribution of the random walk as a stochastic process?

Also, the graph in "a random walk on a graph" has nothing to do with a random graph, right?

(3) If you can come up with a definition for a random walk, please also explain how the various special kinds of random walks fit into the definition you provide.

Thanks and regards!


EDIT:

Thanks to Dr Schmuland for the reply!

Although I have chosen the best answer, I am still not clear about the following questions:

(1) Is a random walk (including a random walk on a group such as $\mathbb{Z}^d$ and a random walk on a graph) equivalent to a discrete-time Markov process? If not, what makes random walks different from general discrete-time Markov processes?

(2) For a random walk defined on a group such as $\mathbb{Z}^d$, does the definition really require the increments between every two consecutive indices to be i.i.d. (this seems to be the definition in Ross's Stochastic Processes), or is independence alone enough, without the increments being identically distributed?

Thanks!


Reply to Dr Schmuland's Update

Thank you, Dr Schmuland, for providing a big picture that I had not clearly realized before! It took me a while to understand. Learning more makes me feel better.

Generally, when the state space is an additive group, are these two properties, the Markov property and independence of increments, equivalent? Must time-homogeneity be defined only for Markov processes? If it can be defined for a general process, is it equivalent to the property that increments over periods of the same length have the same distribution?

I also wonder whether, for a random walk on a graph, the transition probability at each vertex must be uniform. If so, is it uniform over the set consisting of the vertex and its neighbours, or just over its neighbours (the vertex itself excluded, so that it is impossible to stay at the same vertex)?

As for your last exercise, I think time-homogeneity and identically distributed increments are equivalent, and the Markov property and independence of increments are equivalent, so every time-homogeneous Markov chain on this state space would be a random walk on the group $\{0,1,2\}$. But my answer is different if it is to be a random walk on the graph derived from $\{0,1,2\}$ as you mentioned, since the transition probability from a vertex to the other vertices has to be uniform.

Best Answer

It's a good question. The naive answer is that the two concepts are different.

Random walks on ${\mathbb Z}^d$, ${\mathbb R}$, etc., are examples of random walks on an additive group $G$. There is a fixed distribution $\mu$ on $G$, and at each step you randomly select an element of the group (according to $\mu$) and add it to the current state.

In contrast, for a random walk on a (locally finite) graph, at each step you choose an edge at the current node uniformly at random and follow it to the next state (node).
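Here is a minimal sketch of the contrast, one step of each kind of walk; the distribution $\mu$, the example graph, and the function names (`step_group_walk`, `step_graph_walk`) are made up purely for illustration:

```python
import random

def step_group_walk(state, mu):
    """Random walk on an additive group: add an increment drawn from the fixed
    distribution mu, given here as a list of (increment, probability) pairs."""
    increments, probs = zip(*mu)
    return state + random.choices(increments, weights=probs)[0]

def step_graph_walk(vertex, graph):
    """Random walk on a locally finite graph: move along a uniformly chosen
    edge at the current vertex."""
    return random.choice(graph[vertex])

# Simple random walk on the group Z: steps +1 or -1, each with probability 1/2.
mu = [(+1, 0.5), (-1, 0.5)]
x = 0
for _ in range(10):
    x = step_group_walk(x, mu)
print("group walk ended at", x)

# Walk on a small graph given by adjacency lists.
graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
v = "a"
for _ in range(10):
    v = step_graph_walk(v, graph)
print("graph walk ended at", v)
```

Note that the group walk is driven by one fixed distribution on increments, while the graph walk's choice at each step depends only on the local structure (the edges at the current node).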

However, the concepts may be related at a deeper level. The first chapter of Woess's Random Walks on Infinite Graphs and Groups explains how a random walk on a graph is related to a random walk on a quotient of its automorphism group, and also how a random walk on a group is related to a walk on its Cayley graph.

I haven't read the book in detail, but my impression is that, nevertheless, the two concepts are not exactly equivalent.


Update (Nov. 7 2010)

Any stochastic process $(X_n)$ taking values in a countable state space can be built up out of the following pieces:

(1) an initial distribution $P(X_0=x)$ that tells us how to choose the starting point.

(2) transition kernels that tell us how to choose the next state given the history of the process up to the present, that is, $P(X_{n+1}=x_{n+1} | X_n=x_n, \dots, X_0=x_0)$.

Now, a process is called Markov if the kernels only depend on $n$, $x_{n+1}$ and $x_n$, that is, $P(X_{n+1}=x_{n+1} | X_n=x_n, \dots, X_0=x_0) = P(X_{n+1}=x_{n+1} | X_n=x_n)$. This is a big assumption and a huge simplification. But enough models of real-world random behaviour have this "memoryless" property to make Markov processes widespread and extremely useful.

A further simplification is to assume that the Markov process is time homogeneous, so that the transition rules only depend on the states and not on time. That is, we assume that $P(X_{n+1}=y | X_n=x)=P(X_1=y | X_0=x)$ for all $x,y$. The crucial point here is that $p(x,y):=P(X_{n+1}=y | X_n=x)$ is the same for all $n$.
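To make these two ingredients concrete, here is a small sketch of simulating a time-homogeneous Markov chain on a finite state space from exactly those two pieces: an initial distribution and a single transition kernel $p(x,y)$. The particular numbers are invented for illustration only.

```python
import random

states = [0, 1, 2]
initial = [0.5, 0.3, 0.2]        # P(X_0 = x)
p = [[0.1, 0.6, 0.3],            # p(x, y) = P(X_{n+1} = y | X_n = x),
     [0.4, 0.4, 0.2],            # the same matrix for every n
     [0.3, 0.3, 0.4]]

def simulate(n_steps):
    """Draw X_0 from the initial distribution, then repeatedly step using p."""
    x = random.choices(states, weights=initial)[0]
    path = [x]
    for _ in range(n_steps):
        # The next state depends only on the current state, not on the history or on n.
        x = random.choices(states, weights=p[x])[0]
        path.append(x)
    return path

print(simulate(10))
```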

Let's now assume that the state space is an additive group. The transition kernels for a general process can be rewritten in terms of increments: $$P(X_{n+1}=x_{n+1} | X_n=x_n, \dots, X_0=x_0)$$ can be written $$P(X_{n+1}-X_n=x_{n+1}-x_n | X_n-X_{n-1}=x_n-x_{n-1}, \dots, X_0=x_0).$$ If the increments are independent, then this becomes $$P(X_{n+1}-X_n=x_{n+1} -x_n)=:\mu_n(x_{n+1}-x_n).$$ In this case, the process is automatically Markov and the transition kernels are an arbitrary sequence $(\mu_n)$ of probabilities on the state space.
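To see the difference between independence and identical distribution of the increments (the question in the edit above), here is a sketch with a made-up sequence of increment distributions $\mu_n$: the increments are independent, so the process is Markov, but the kernel changes with $n$, so it is not time-homogeneous.

```python
import random

def simulate(n_steps, x0=0):
    """Walk on Z with independent but NOT identically distributed increments:
    at step n it moves +1 with probability 1/(n+2) and -1 otherwise (an
    arbitrary illustrative choice of mu_n)."""
    x = x0
    path = [x]
    for n in range(n_steps):
        p_up = 1.0 / (n + 2)                              # mu_n(+1) = 1/(n+2)
        x += random.choices([+1, -1], weights=[p_up, 1 - p_up])[0]
        path.append(x)
    return path

print(simulate(10))
```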

Finally, for a random walk we assume both time homogeneity and independent increments. Thus, the transition kernel $p(x,y)$ is represented by a single measure: $p(x,y)=\mu(y-x)$. This imposes a kind of homogeneity in space and in time. We are down to an extremely special, yet still interesting process, that only depends on the initial distribution and the single measure $\mu$. This is why Ross insists that for random walks the increments are identically distributed; just being independent is not enough.
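On a finite group such as ${\mathbb Z}$ mod $n$, this spatial homogeneity is easy to see in the transition matrix: $p(x,y)=\mu(y-x)$ means the matrix is constant along its "wrapped diagonals" (a circulant matrix). A small sketch, with an illustrative choice of $\mu$:

```python
def walk_kernel(mu, n):
    """Transition matrix of the random walk on Z mod n with step distribution mu:
    p[x][y] = mu((y - x) mod n)."""
    return [[mu.get((y - x) % n, 0.0) for y in range(n)] for x in range(n)]

mu = {1: 0.7, 2: 0.3}          # step +1 with probability 0.7, +2 with probability 0.3
for row in walk_kernel(mu, 3):
    print(row)
# [0.0, 0.7, 0.3]
# [0.3, 0.0, 0.7]
# [0.7, 0.3, 0.0]
```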

Because random walks are so special, they can be analyzed by mathematical tools that don't work for most Markov processes. In particular, tools from harmonic analysis are very fruitful here. A standard reference, though a little old, is Principles of Random Walk (2nd ed) by Frank Spitzer. A more recent treatment is given in Random Walk: A Modern Introduction by Gregory Lawler and Vlada Limic (2010).

Here's an exercise to convince you that random walks are only a small part of all Markov chains. Take the state space to be the corners of a triangle, labelled $\{ 0, 1, 2\}$, considered as the group ${\mathbb Z}$ mod 3. Any 3 by 3 stochastic matrix (non-negative entries and row sums all equal to 1) defines a time homogeneous Markov chain on this state space. Which of these are random walks?
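If you want to experiment with this exercise numerically, one way is to test whether a given stochastic matrix can be written as $p(x,y)=\mu(y-x)$ on ${\mathbb Z}$ mod 3, i.e. whether it is circulant. The helper name and the example matrices below are my own illustrations, not part of the exercise.

```python
def is_random_walk(p, n=3, tol=1e-12):
    """Check whether p[x][y] can be written as mu((y - x) mod n)."""
    mu = [p[0][k] for k in range(n)]              # read mu off the first row
    return all(abs(p[x][y] - mu[(y - x) % n]) < tol
               for x in range(n) for y in range(n))

circulant = [[0.2, 0.5, 0.3],
             [0.3, 0.2, 0.5],
             [0.5, 0.3, 0.2]]
generic   = [[0.2, 0.5, 0.3],
             [0.1, 0.6, 0.3],
             [0.4, 0.4, 0.2]]
print(is_random_walk(circulant))   # True: a random walk with mu = (0.2, 0.5, 0.3)
print(is_random_walk(generic))     # False: time-homogeneous Markov, but not a random walk
```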
