The idea is that if there are $m$ independent random variables $X_i$, each uniformly distributed on $[0,1]$ (so with mean $\frac12$ and variance $\frac1{12}$), then their sum $S_m$ has mean $\frac{m}2$ and variance $\frac{m}{12}$, while their average $\frac1m S_m$ has mean $\frac{1}2$ and variance $\frac{1}{12m}$.
The Central Limit Theorem says that $\sqrt{m}\left(\frac1m S_m - \frac{1}2\right)$ converges in distribution to $\mathcal N\left(0,\frac1{12}\right)$, i.e. $\dfrac{ S_m - \frac{m}2}{\sqrt{\frac{m}{12}}}$ converges in distribution to $\mathcal N(0,1)$ as $m$ increases.
So for sufficiently large $m$, you might use $\dfrac{ S_m - \frac{m}2}{\sqrt{\frac{m}{12}}}$ to generate a random variable whose distribution is close to that of a standard normal random variable.
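For instance, here is a minimal Python sketch of this uniform-sum generator. The helper name `clt_normal` is my own, and $m=12$ is the classic choice because the denominator $\sqrt{m/12}$ becomes exactly $1$:

```python
import numpy as np

def clt_normal(m, size, rng=None):
    """Approximate standard-normal samples: sum m Uniform(0,1) draws,
    then standardise by the mean m/2 and standard deviation sqrt(m/12)."""
    rng = np.random.default_rng(0) if rng is None else rng
    s = rng.random((size, m)).sum(axis=1)      # S_m for each sample
    return (s - m / 2) / np.sqrt(m / 12)       # (S_m - m/2) / sqrt(m/12)

z = clt_normal(m=12, size=100_000)
print(z.mean(), z.std())   # both close to 0 and 1 respectively
```

With $m=12$ the tails are slightly too light (the output is bounded by $\pm 6$), but the bulk of the distribution is already very close to normal.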
If the day has length $1$ and there are $n$ independent uniformly distributed start times, then this is equivalent to breaking a stick into $n+1$ pieces, and there is probably a duplicate question on this site.
The inter-arrival times are not independent, but they are identically distributed (as are the time before the first arrival and the time after the last, so $n+1$ gaps overall). Jointly the gaps have a Dirichlet distribution with all parameters equal to $1$, i.e. they are uniformly distributed on the simplex.
Each gap actually has a $\text{Beta}(1,n)$ distribution so with CDF $1-(1-x)^n$ and density $n(1-x)^{n-1}$, with mean $\frac1{n+1}$ and variance $\frac{n}{(n+1)^2(n+2)}$. I have seen this called a "reverse power function distribution".
If the day is not length $1$ but $d$ then stretch these results so the CDF of the inter-arrival times is $1-\left(1-\frac xd\right)^{n}$ and density $\frac nd\left(1-\frac xd\right)^{n-1}$ and mean $\frac d{n+1}$ and variance $\frac{n d^2}{(n+1)^2(n+2)}$. For large $n$ this is close to an exponential distribution with rate $\frac{n+1}{d}$, as you might guess.
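These formulas are easy to check by simulation. Here is a Python sketch (the values $n=100$ and $d=24$ are illustrative choices of mine); note that the overall mean of the gaps is $\frac{d}{n+1}$ exactly, since the $n+1$ gaps in each day sum to $d$:

```python
import numpy as np

# n uniform start times on a day of length d give n+1 gaps; each gap
# should have mean d/(n+1) and variance n*d^2 / ((n+1)^2 * (n+2)).
rng = np.random.default_rng(1)
n, d, reps = 100, 24.0, 20_000

times = np.sort(rng.uniform(0, d, size=(reps, n)), axis=1)
gaps = np.diff(times, axis=1, prepend=0.0, append=d)   # n+1 gaps per day

print(gaps.mean())   # d/(n+1) exactly, since each day's gaps sum to d
print(gaps.var())    # close to n*d^2 / ((n+1)^2 * (n+2))
```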
For example, here are the densities using R when $d=1$ and $n=100$, with the Beta distribution in red and the exponential distribution in blue. They are very close to each other, and further right both would be very close to $0$ (there is a slight distinction beyond $1$, where the Beta density is exactly $0$ while the exponential density is positive but less than $10^{-40}$). For larger $n$ they would be even closer.
n <- 100
curve(dbeta(x,1,n), from=0, to=6/n, col="red")
curve(dexp(x, n+1), from=0, to=6/n, add=TRUE, col="blue")
Best Answer
Here I confirm the claim about the given weak convergence to $U[0,1]$.
Moreover, I also present an upper bound on the convergence rate, which shows that the underlying convergence is extremely rapid.
For simplicity, assume WLOG that $\mu=0$. Let $(X_{\sigma}, \sigma \in \mathbb{R}_+)$ be the corresponding family of random variables, and let $f_{\sigma}$ be the density function of the fractional part $\{ X_{\sigma} \}$.
For any positive number $\sigma$ and integer $n$, we have: $$\int_{0}^1 f_{\sigma}(t)e^{-2i\pi n t}\, dt= \mathbb{E}(e^{-2i\pi n X_{\sigma}}) = e^{-2\pi^2 n^2 \sigma^2}$$ (the first equality holds because $t \mapsto e^{-2i\pi n t}$ is $1$-periodic, so $\{X_\sigma\}$ may be replaced by $X_\sigma$). Since $\sum_{n \in \mathbb{Z}} \left| e^{-2\pi^2 n^2 \sigma^2}\right|^2<\infty$, the Riesz–Fischer theorem gives $$f_{\sigma} \in L^2([0,1])$$ and Parseval's identity then yields: $$\int_{0}^1 |f_{\sigma}(t)-1|^2dt=\sum_{n \in \mathbb{Z}} \left| e^{-2\pi^2 n^2 \sigma^2}-\mathbb{1}_{\{n=0\}}\right|^2=\sum_{n \ge 1}2e^{-4\pi^2n^2\sigma^2}\xrightarrow[]{\sigma \rightarrow \infty} 0$$ Hence, $$\{ X_{\sigma} \} \xrightarrow[\sigma \rightarrow \infty]{\text{(d)}} \mathcal{U}([0,1])$$ And in particular,
for any bounded measurable function $g$, we deduce the rate of convergence
$$\left| \mathbb{E}( g( \{X_{\sigma}\}))-\int_0^1 g(x)dx \right| \le \|g\|_{\infty}e^{-2\pi^2 \sigma^2}\sqrt{\frac{2}{1-e^{-4\pi^2 \sigma^2}}}$$
The deviation decays like $e^{-2\pi^2 \sigma^2}$, so the convergence is extremely rapid. $\square$
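Both the Parseval computation and the bound can be sanity-checked numerically. The sketch below (my own addition; $\sigma = 0.3$ and the test function $g(x)=\cos(2\pi x)$ are illustrative choices) uses the fact that the density of $\{X_\sigma\}$ is the wrapped normal $f_\sigma(t)=\sum_{k\in\mathbb Z}\varphi_\sigma(t+k)$, and that for this $g$ one has $\int_0^1 g = 0$ while $\mathbb{E}[g(\{X_\sigma\})]=\mathbb{E}[\cos(2\pi X_\sigma)]=e^{-2\pi^2\sigma^2}$ exactly, by periodicity:

```python
import numpy as np

sigma = 0.3

# 1) Parseval check: int_0^1 |f_sigma - 1|^2 dt, with f_sigma the wrapped
#    normal density, should equal 2 * sum_{n>=1} exp(-4 pi^2 n^2 sigma^2).
t = np.linspace(0.0, 1.0, 2001)
k = np.arange(-30, 31)
f = np.exp(-((t[:, None] + k) ** 2) / (2 * sigma**2)).sum(axis=1) \
    / (sigma * np.sqrt(2 * np.pi))
sq = (f - 1.0) ** 2
h = t[1] - t[0]
lhs = h * (sq.sum() - 0.5 * (sq[0] + sq[-1]))            # trapezoid rule
rhs = 2 * np.sum(np.exp(-4 * np.pi**2 * np.arange(1, 50) ** 2 * sigma**2))

# 2) Bound check with g(x) = cos(2 pi x): Monte Carlo estimate of
#    |E[g({X_sigma})] - int_0^1 g| versus the stated bound (||g||_inf = 1).
rng = np.random.default_rng(2)
x = rng.normal(0.0, sigma, size=500_000)
dev = abs(np.mean(np.cos(2 * np.pi * (x % 1.0))))        # |E g({X}) - 0|
exact = np.exp(-2 * np.pi**2 * sigma**2)
bound = exact * np.sqrt(2.0 / (1.0 - np.exp(-4 * np.pi**2 * sigma**2)))

print(lhs, rhs)            # the two sides of Parseval agree closely
print(dev, exact, bound)   # the observed deviation sits below the bound
```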
Side note: using a density argument (replace $X$ in the following statement with any random variable whose density function is in $\mathcal{C}^{2}_{c}$), we can deduce an even more general result.