In a homogeneous Poisson process with rate $\lambda$, what is the probability of observing an event in an "instant," that is, an infinitesimally small interval of length dt? I have read that the Poisson rate function $\lambda(t)$ can be defined as the "instantaneous probability of observing a spike at each point in time." (http://www.stat.columbia.edu/~liam/teaching/neurostat-spr11/uri-eden-point-process-notes.pdf) But for a homogeneous process with $\lambda(t) = \lambda$, how can this be when it is possible that $\lambda > 1$?
Solved – Instantaneous Event Probability in Poisson Process
poisson process
Related Solutions
Let $t=T_F$. Conditional on the number of occurrences $N=n$, the arrival times $t_1,t_2,\dots,t_N$ are known to have the same distribution as the order statistics of $n$ iid unif$(0,t)$ random variables. Hence, the likelihood becomes \begin{align} L(\lambda,t) &= P(N=n) f(t_1,t_2,\dots,t_N|N=n) \\ &= \frac{e^{-\lambda t}(\lambda t)^n}{n!}\frac{n!}{t^n} \\ &= e^{-\lambda t}\lambda^n \end{align} for $t\ge t_n$ and zero elsewhere. This is maximised for $\hat t=t_n$ and $\hat\lambda=n/t_n$. These MLEs don't exist if there are no occurrences ($N=0$), however. Conditional on $N=n$, again using the fact that $t_n$ can be viewed as an order statistic (the maximum) of $n$ iid unif$(0,t)$ random variables, $E(t_N|N=n)=\frac n{n+1} t$. Hence, the estimator $t^*=\frac {n+1}n t_n$ is unbiased for $t$ conditional on $N=n$ and hence also conditional on $N\ge 1$. A reasonable frequentist estimator of $\lambda$ might be $\lambda^* = n/t^* = \frac{n^2}{(n+1)t_n}$, but this does not have finite expectation when $N=1$, so assessing its bias is even more troublesome.
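A quick Monte Carlo sketch can illustrate the bias comparison (the rate $\lambda = 2$ and true endpoint $t = 5$ below are made-up values, not from the question): the raw MLE $\hat t = t_n$ underestimates $t$, while $t^* = \frac{n+1}{n}t_n$ averages out to $t$.

```python
import random

random.seed(0)
lam, t_true = 2.0, 5.0   # illustrative rate and true endpoint T_F
mle_vals, unbiased_vals = [], []

for _ in range(20000):
    # simulate a Poisson process on (0, t_true) via exponential interarrivals
    arrivals, s = [], random.expovariate(lam)
    while s < t_true:
        arrivals.append(s)
        s += random.expovariate(lam)
    if not arrivals:                 # skip N = 0: the estimators don't exist
        continue
    n, t_n = len(arrivals), arrivals[-1]
    mle_vals.append(t_n)                        # hat t = t_n (biased low)
    unbiased_vals.append((n + 1) / n * t_n)     # t* = (n+1)/n * t_n

print(sum(mle_vals) / len(mle_vals))            # noticeably below 5
print(sum(unbiased_vals) / len(unbiased_vals))  # close to 5
```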
Bayesian inference using independent, non-informative scale priors on $\lambda$ and $t$, on the other hand, leads to a posterior $$ f(\lambda,t|t_1,\dots,t_N) \propto e^{-\lambda t}\lambda^{n-1}t^{-1} $$ for $t>t_n,\lambda>0$. Integrating out $\lambda$, the marginal posterior of $t$ becomes $$ f(t|t_1,\dots,t_N) = \frac{n t_n^n}{t^{n+1}}, \quad t>t_n, $$ and the posterior mean is $E(t|t_1,\dots,t_N)=\frac n{n-1} t_n$. A $(1-\alpha)$-credible interval for $t$ is given by $\left(\frac{t_n}{(1-\alpha/2)^{1/n}}, \frac{t_n}{(\alpha/2)^{1/n}}\right)$.
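For concreteness, here is how these posterior summaries would be computed; the data values $n = 10$ and $t_n = 4.2$ are invented for illustration:

```python
# Posterior summaries for t given n arrivals, the last at time t_n.
n, t_n = 10, 4.2      # illustrative data, not from the derivation above
alpha = 0.05          # for a 95% credible interval

post_mean = n / (n - 1) * t_n                # E(t | data) = n/(n-1) * t_n
lower = t_n / (1 - alpha / 2) ** (1 / n)     # solves P(t > lower) = 1 - alpha/2
upper = t_n / (alpha / 2) ** (1 / n)         # solves P(t > upper) = alpha/2

print(post_mean, (lower, upper))
```

Note that the interval necessarily lies above $t_n$, since the posterior puts no mass below the last observed arrival.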
The marginal posterior of $\lambda$ is \begin{align} f(\lambda|t_1,\dots,t_N) &\propto \int_{t_n}^\infty e^{-\lambda t}\lambda^{n-1}t^{-1}\, dt \\ &= \lambda^{n-1}\Gamma(0,\lambda t_n), \end{align} where $\Gamma(0,\cdot)$ is the upper incomplete gamma function.
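The identity used in the last step, $\int_{t_n}^\infty e^{-\lambda t}\,t^{-1}\,dt = \Gamma(0,\lambda t_n)$, follows from the substitution $u = \lambda t$ and can be checked numerically; a stdlib-only sketch with illustrative values ($\lambda = 2$, $t_n = 1.5$):

```python
import math

lam, t_n = 2.0, 1.5   # illustrative values, not from the derivation

def midpoint_integral(f, a, b, steps=200000):
    """Approximate the integral of f over [a, b] with the midpoint rule."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

# Left side: integral_{t_n}^inf e^{-lam t} / t dt (truncated; the tail is negligible)
lhs = midpoint_integral(lambda t: math.exp(-lam * t) / t, t_n, 30.0)

# Right side: Gamma(0, lam * t_n) = integral_{lam t_n}^inf e^{-u} / u du
rhs = midpoint_integral(lambda u: math.exp(-u) / u, lam * t_n, 60.0)

print(lhs, rhs)  # the two sides agree, confirming the substitution u = lam * t
```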
The discussion in your book is not phrased correctly in some respects, but first let me address your question about conditioning on an event of probability $0$, something that the definition of conditional probability in an earlier chapter of your book explicitly forbids.
For jointly continuous random variables $X$ and $Y$ with joint pdf $f_{X,Y}(u,v)$, the conditional pdf of $Y$ given that $X = u$ is defined to be $$f_{Y\mid X}(v\mid X = u) = \begin{cases} \displaystyle \frac{f_{X,Y}(u,v)}{f_{X}(u)}, & \text{if }~f_{X}(u)>0,\\0, &\text{otherwise,}\end{cases}$$ where $f_X(u)$ is the (marginal) pdf of $X$. The conditional complementary CDF is $$1-F_{Y\mid X}(t\mid X = u) = P\{Y > t\mid X = u\} = \int_t^\infty f_{Y\mid X}(v\mid X = u) \,\mathrm dv.$$ Now, in your application, $P\{X_2 > t\mid X_1 = s\}$ can be calculated directly since we are told that the first arrival occurred at $s$ and are being asked for the conditional probability that no arrivals occur in $(s,s+t]$. But what happens in $(s,s+t]$ is independent of what happened in $(0,s]$ since the time intervals are disjoint. That is, $P\{\text{no arrivals in} ~ (s,s+t]\mid X_1=s\}$ is the same regardless of whether we assume that there was an arrival at $s$ or that the first arrival occurred before time $s$, and so $$P\{X_2 > t\mid X_1 = s\} = P\{\text{no arrivals in} ~ (s,s+t]\} = e^{-\lambda t},$$ and thus the conditional pdf $f_{X_2\mid X_1 = s}(v\mid X_1 = s)$ is the same as the unconditional pdf $f_{X_2}(v) = \lambda e^{-\lambda v}, v > 0$. Conditionally or unconditionally, the distribution of $X_2$ is exponential with parameter $\lambda$. Furthermore, \begin{align} f_{X_2}(v) = f_{X_2\mid X_1 = s}(v\mid X_1 = s) = \displaystyle \frac{f_{X_1,X_2}(s,v)}{f_{X_1}(s)} \implies f_{X_1,X_2}(s,v) = f_{X_1}(s)f_{X_2}(v), \end{align} showing that $X_1$ and $X_2$ are independent (exponential random variables with parameter $\lambda$).
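This can be illustrated without assuming the conclusion: a sketch that simulates the process from the counting definition on a fine grid (an event in each bin of width $\Delta$ with probability $\lambda\Delta$; the values $\lambda = 1.5$ and $\Delta = 0.005$ are arbitrary choices), then checks that the second interarrival time $X_2$ has the $e^{-\lambda t}$ survival function:

```python
import math
import random

random.seed(1)
lam, dt, horizon = 1.5, 0.005, 10.0   # arbitrary rate, bin width, window
n_bins = int(horizon / dt)

def first_two_gaps():
    """Bernoulli approximation of the process: event in each bin w.p. lam*dt.
    Returns (X1, X2), the first two interarrival times (None if < 2 events)."""
    times = [i * dt for i in range(n_bins) if random.random() < lam * dt]
    if len(times) < 2:
        return None
    return times[0], times[1] - times[0]

samples = [g for g in (first_two_gaps() for _ in range(5000)) if g is not None]
t = 0.8
emp = sum(x2 > t for _, x2 in samples) / len(samples)
print(emp, math.exp(-lam * t))  # empirical P{X2 > t} vs exponential survival
```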
The answers to our specific questions are hidden somewhere in the above.
Best Answer
The instantaneous probability of observing a spike between $t$ and $t + dt$ is $\lambda(t)\,dt$ (mind the $dt$ term). This follows directly from the definition. For the homogeneous Poisson process: $$ P [(N(t+ \tau) - N(t)) = k] = \frac{e^{-\lambda \tau} (\lambda \tau)^k}{k!} $$
Setting $k = 1$ and $\tau = dt$ gives $P[dN(t) = 1] = e^{-\lambda\,dt}\lambda\,dt = \lambda\,dt + o(dt)$, since $e^{-\lambda\,dt} \to 1$ as $dt \to 0$. This resolves the apparent paradox in the question: $\lambda$ itself is not a probability but a rate, with units of events per unit time, so $\lambda > 1$ is perfectly fine; the actual probability $\lambda\,dt$ is less than $1$ for sufficiently small $dt$.
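A quick numeric check (the value $\lambda = 3$ is chosen deliberately greater than $1$ to match the question): the exact probability of seeing one event in an interval of length $dt$ approaches $\lambda\,dt$, which is itself small even though $\lambda > 1$.

```python
import math

lam = 3.0  # a rate > 1 is fine: lam is events per unit time, not a probability
for dt in (0.1, 0.01, 0.001):
    p_one = math.exp(-lam * dt) * (lam * dt)  # exact P[N(t+dt) - N(t) = 1]
    print(dt, p_one, lam * dt)                # p_one approaches lam*dt as dt -> 0
```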