[Math] Dirac delta function with a sum as the argument

dirac-delta, physics, random-walk

I'm reading "First steps in random walks" by Klafter and Sokolov, and I don't understand this step involving the Dirac delta function. They want to obtain the probability density of having a walker at $x$ after $n$ steps: $P_n(x)$. They begin with

\begin{equation}
P_n(x)=\int_{-\infty}^\infty dy P_{n-1}(y)p(x-y), \qquad (1)
\end{equation}
where $p(x)$ is the probability density to take a step of length $x$. Then they reiterate (1) and obtain (making a change of variables, I suppose)

\begin{equation}
P_n(x)=\int_{-\infty}^\infty\cdots \int_{-\infty}^\infty dx_1\cdots dx_{n-1} p(x_1)p(x_2)\cdots p(x-x_{n-1}), \qquad (2)
\end{equation}

and then they introduce the Dirac delta function as a "formal trick", they say:

\begin{equation}
P_n(x)=\int_{-\infty}^\infty\cdots \int_{-\infty}^\infty dx_1\cdots dx_n p(x_1)p(x_2)\cdots p(x_n) \delta \left(\sum_{i=1}^nx_i-x \right), \qquad (3)
\end{equation}

My question is: how can we show (heuristically) the equivalence between (2) and (3)? I'm used to Dirac delta functions of the form $\delta(x-x_0)$, but not of the form appearing in (3).


EDITED

Maybe it would help to notice that $P_n(x)$ is the probability density of the random variable $X_n:=\sum_{i=1}^nx_i$.

Best Answer

From the get-go we can tell that equation (2) is incorrect as written, since it factors as

$$\left(\int_{-\infty}^\infty p(x_1)dx_1\right)\cdots\left(\int_{-\infty}^\infty p(x_{n-2})dx_{n-2}\right)\int_{-\infty}^\infty p(x_{n-1})p(x-x_{n-1})dx_{n-1} \tag{2-bad}$$

which is $1\cdots 1\cdot P_2(x)$ instead of the $P_n(x)$ it's supposed to be.


The correct form can be intuited. Since the step sizes are independent, the joint probability density of taking steps $(x_1,\cdots,x_n)$ is $p(x_1)\cdots p(x_{n-1})p(x_n)$. To obtain the probability density that the $n$ steps sum to $x$, we integrate this over the hyperplane defined by the equation $x_1+\cdots+x_n=x$. This domain of integration can be parametrized by letting the variables $x_1,\cdots,x_{n-1}$ roam freely and setting $x_n=x-(x_1+\cdots+x_{n-1})$, yielding

$$\int_{-\infty}^\infty\cdots\int_{-\infty}^\infty p(x_1)\cdots p(x_{n-1})p\big(x-(x_1+\cdots+x_{n-1})\big)\, dx_{n-1}\cdots dx_1. \tag{2-good}$$
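As a numerical sanity check of $(\text{2-good})$ (this is my own illustration, not from the book): if the single-step density $p$ is standard Gaussian, the $n$-fold convolution is known in closed form, namely $P_n = \mathcal N(0, n)$. A Monte Carlo simulation of the walk should reproduce those moments:

```python
import numpy as np

rng = np.random.default_rng(0)

# With Gaussian steps p = N(0, 1), the n-fold convolution (2-good)
# is N(0, n): mean 0, variance n.
n, samples = 5, 200_000
steps = rng.standard_normal((samples, n))  # x_1, ..., x_n for each walk
X = steps.sum(axis=1)                      # walker position after n steps

print(X.mean())  # should be close to 0
print(X.var())   # should be close to n = 5
```

The empirical mean and variance of the summed steps match the analytic density, consistent with $P_n$ being the law of $X_n = \sum_{i=1}^n x_i$.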


We can derive this from (1) using substitution if we wish. Observe

$$P_n(x)=\int_{-\infty}^\infty p(x-u_1)P_{n-1}(u_1)du_1=\int_{-\infty}^\infty\int_{-\infty}^\infty p(x-u_1)p(u_1-u_2)P_{n-2}(u_2)du_2du_1$$

$$\cdots =\int_{-\infty}^\infty\cdots\int_{-\infty}^\infty p(x-u_1)p(u_1-u_2)\cdots p(u_{n-2}-u_{n-1})P_1(u_{n-1})du_{n-1}\cdots du_1 $$

and of course $P_1=p$. Make a change of variables

$$\begin{bmatrix}x_1 \\ x_2 \\ \vdots \\ x_{n-2} \\ x_{n-1}\end{bmatrix}=\begin{bmatrix}u_1-u_2 \\ u_2-u_3 \\ \vdots \\ u_{n-2}-u_{n-1} \\ u_{n-1}\end{bmatrix}=\begin{bmatrix} 1 & -1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & -1 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & -1 \\ 0 & 0 & 0 & \cdots & 0 & 1 \end{bmatrix} \begin{bmatrix}u_1 \\ u_2 \\ \vdots \\ u_{n-2} \\ u_{n-1}\end{bmatrix}.$$

The Jacobian determinant of this upper triangular matrix is $1$, and $u_1=x_1+\cdots+x_{n-1}$, so the integral becomes precisely the one written in $(\text{2-good})$.
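To close the loop with the original question, note that $(\text{2-good})$ and (3) are equivalent because the delta function in (3) simply performs the substitution $x_n=x-(x_1+\cdots+x_{n-1})$ for us. Carrying out the $x_n$ integral first,

```latex
\int_{-\infty}^\infty p(x_n)\,\delta\!\left(\sum_{i=1}^n x_i - x\right) dx_n
  = p\big(x-(x_1+\cdots+x_{n-1})\big),
```

and inserting this into (3) recovers $(\text{2-good})$ exactly: the delta constrains the integration to the hyperplane $x_1+\cdots+x_n=x$ while letting all $n$ variables be written symmetrically.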
