Let $\mu$ be the reference measure of the Poisson process. By definition, this means that the number of arrivals falling into any $n$ mutually disjoint sets $B_1,\ldots,B_n$ has the same distribution as independent Poisson random variables with means $\mu(B_1),\ldots,\mu(B_n)$.
Specializing to the case of a Poisson process on $[0,\infty)$ (so that the interarrival times are well-defined), we see that if $T$ denotes the first arrival time, then
$$\mathbb P(T>t)=\mathbb P(0\text{ arrivals in }[0,t])=e^{-\mu([0,t])},$$
where the last equality uses the probability of a Poisson variable with mean $\mu([0,t])$ being zero.
You can see from this that if $\mu$ is not a constant multiple of Lebesgue measure, then $T$ is no longer an exponential random variable.
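A quick numerical sketch makes this concrete. Assume, purely for illustration, an intensity $\lambda(t)=2t$, so that $\mu([0,t])=t^2$; then $T$ can be sampled by inversion as $T=\sqrt{E}$ with $E\sim\mathrm{Expo}(1)$, and $\mathbb P(T>t)=e^{-t^2}$, which is not exponential:

```python
import math
import random

random.seed(0)

# Hypothetical intensity lambda(t) = 2t, so mu([0,t]) = t^2.
# By inversion, T = sqrt(E) with E ~ Expo(1), since
# P(T > t) = P(E > t^2) = exp(-t^2) = exp(-mu([0,t])).
def first_arrival():
    return math.sqrt(random.expovariate(1.0))

n = 100_000
t = 0.8
emp = sum(first_arrival() > t for _ in range(n)) / n
print(emp, math.exp(-t * t))  # empirical vs. exact survival probability
```

The empirical survival probability matches $e^{-t^2}$ rather than any $e^{-ct}$.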
The interarrival times are no longer independent in general, since if $T'$ denotes the next interarrival time after $T$ then
$$
\mathbb P(T'>t\mid T)=e^{-\mu([T,T+t])}.
$$
Whenever $\mu$ is not a constant multiple of Lebesgue measure, this conditional probability depends on $T$, and thus $(T,T')$ are not independent.
The joint distribution of $(T,T')$ satisfies
$$
\mathbb P(T'>t,T>s)=\mathbb E[e^{-\mu([T,T+t])};T>s],
$$
but one cannot go much further without knowing more about the reference measure $\mu$.
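The dependence of $T'$ on $T$ can also be seen numerically. Continuing with the same hypothetical intensity $\lambda(t)=2t$, we have $\mu([T,T+t]) = (T+t)^2 - T^2$, so given $T$ the next gap can be sampled as $T'=\sqrt{T^2+E'}-T$ with $E'\sim\mathrm{Expo}(1)$ independent of $T$:

```python
import math
import random

random.seed(1)

# Hypothetical intensity lambda(t) = 2t, so mu([T, T+t]) = (T+t)^2 - T^2.
# Given T, P(T' > t | T) = exp(-((T+t)^2 - T^2)), hence by inversion
# T' = sqrt(T^2 + E') - T with E' ~ Expo(1) independent of T.
def first_two_gaps():
    e1, e2 = random.expovariate(1.0), random.expovariate(1.0)
    t1 = math.sqrt(e1)
    gap = math.sqrt(t1 * t1 + e2) - t1
    return t1, gap

pairs = [first_two_gaps() for _ in range(200_000)]
after_early = [g for t1, g in pairs if t1 < 0.5]
after_late = [g for t1, g in pairs if t1 > 1.5]
# The intensity grows with time, so gaps after a late first arrival
# should be shorter on average than gaps after an early one.
print(sum(after_early) / len(after_early), sum(after_late) / len(after_late))
```

The conditional mean of $T'$ visibly shrinks as $T$ grows, confirming that $(T,T')$ are not independent here.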
Kallenberg's textbook is a good reference for Poisson process questions.
Well, I solved the question while typing.
But in short: in my experiment of repeatedly sampling numbers and accepting those above a certain threshold, the number of trials until acceptance follows a geometric distribution.
The geometric distribution is the discrete analogue of the exponential distribution; however, I don't feel confident explaining exactly how they are connected, though there are many resources on that.
They indeed look very similar in this picture (100_000 realisations).
Blue is my experiment, orange is the geometric with $p=1-y$.
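The experiment can be sketched in a few lines; here I assume (as a guess at the setup) that the samples are $U\sim\mathrm{Uniform}(0,1)$ and the threshold is $y$, so each trial succeeds with probability $p = 1-y$ and the trial count is geometric with mean $1/p$:

```python
import random

random.seed(2)

# Assumed setup: draw U ~ Uniform(0,1) repeatedly and accept once U > y.
# Each trial succeeds independently with p = 1 - y, so the number of
# trials until the first acceptance is Geometric(p) with mean 1/p.
y = 0.7

def trials_until_accept():
    count = 1
    while random.random() <= y:
        count += 1
    return count

reps = 100_000
mean = sum(trials_until_accept() for _ in range(reps)) / reps
print(mean, 1 / (1 - y))  # empirical mean vs. geometric mean 1/p
```

The empirical mean lands close to $1/(1-y)$, matching the orange geometric curve in the picture.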
Best Answer
Let $X_1,\ldots,X_n$ be independent with $\mathrm{Expo}(\lambda)$ distribution. We compute the density of $X_1+X_2$ by convolution: \begin{align} f_{X_1+X_2}(t) &= f_{X_1}\star f_{X_2}(t)\\ &= \int_{\mathbb R} f_{X_1}(\tau)f_{X_2}(t-\tau)\ \mathsf d\tau\\ &= \int_0^t \lambda e^{-\lambda \tau}\lambda e^{-\lambda(t-\tau)}\ \mathsf d\tau\\ &= \lambda^2 e^{-\lambda t} \int_0^t\ \mathsf d\tau\\ &= \lambda ( \lambda t)e^{-\lambda t}\mathsf 1_{(0,\infty)}(t). \end{align}
Assuming now that $S_n:=\sum_{k=1}^n X_k$ has density $\frac{\lambda (\lambda t)^{n-1} e^{-\lambda t}}{(n-1)!}$, we compute the density of $S_{n+1} = S_n + X_{n+1}$ again by convolution: \begin{align} f_{S_{n+1}}(t) &= f_{S_n}\star f_{X_{n+1}}(t)\\ &= \int_0^t \frac{\lambda (\lambda \tau)^{n-1} e^{-\lambda \tau}}{(n-1)!}\lambda e^{-\lambda (t-\tau)}\ \mathsf d\tau\\ &=\frac{\lambda^2e^{-\lambda t}}{(n-1)!}\int_0^t (\lambda\tau)^{n-1} \ \mathsf d\tau\\ &= \frac{\lambda^2e^{-\lambda t}}{(n-1)!} \frac{(\lambda t)^n}{\lambda n}\\ &= \frac{\lambda(\lambda t)^n e^{-\lambda t}}{n!}, \end{align} so the density is of this form for all positive integers $n$ by mathematical induction.
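The induction gives the Erlang density $f_{S_n}(t)=\frac{\lambda(\lambda t)^{n-1}e^{-\lambda t}}{(n-1)!}$, whose CDF is $\mathbb P(S_n\le t)=1-e^{-\lambda t}\sum_{k=0}^{n-1}\frac{(\lambda t)^k}{k!}$. A Monte Carlo check of this result (with illustrative values of $\lambda$, $n$, $t$):

```python
import math
import random

random.seed(3)

lam, n, t = 2.0, 4, 1.5  # arbitrary illustrative parameters

# S_n is a sum of n independent Expo(lam) variables; the derivation above
# says P(S_n <= t) = 1 - exp(-lam*t) * sum_{k=0}^{n-1} (lam*t)^k / k!.
def sample_s_n():
    return sum(random.expovariate(lam) for _ in range(n))

reps = 100_000
emp = sum(sample_s_n() <= t for _ in range(reps)) / reps
theo = 1 - math.exp(-lam * t) * sum(
    (lam * t) ** k / math.factorial(k) for k in range(n)
)
print(emp, theo)  # empirical CDF vs. Erlang CDF at t
```

The empirical CDF agrees with the closed form to within Monte Carlo error, as the induction predicts.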