[Math] Why the probability distribution of a uniform random variable is the Lebesgue measure

lebesgue-measure, measure-theory, probability, probability-distributions, uniform-distribution

Consider a random variable $X$ defined on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and uniformly distributed on $[0,1]$.

The probability distribution of $X$ is defined as the map
$$
p:\mathcal{B}(\mathbb{R})\rightarrow [0,1]
$$
such that, for any $E\in \mathcal{B}(\mathbb{R})$,
$$
p(E):=\mathbb{P}(X^{-1}(E))
$$
Then $p$ is a probability measure on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, so $(\mathbb{R}, \mathcal{B}(\mathbb{R}), p)$ is a probability space.

Let $\mu$ be the Lebesgue measure on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$.
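
For concreteness: if "uniform on $[0,1]$" is introduced through the CDF $F(t)=\mathbb{P}(X\le t)=t$ for $t\in[0,1]$ (this is an assumption on the setup), then the definition already pins down $p$ on intervals: for $0\le a\le b\le 1$,
$$
p([a,b])=\mathbb{P}\big(X^{-1}([a,b])\big)=\mathbb{P}(a\le X\le b)=b-a=\mu([a,b]),
$$
since the CDF is continuous and the endpoints carry no mass. So the question is really about general Borel sets.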

Why is $p(E)=\mu(E)$ for every Borel set $E\subseteq [0,1]$? Is that true by definition?


My attempt, which I think is wrong, is the following: the probability density function of $X$ is
$$
f(t)=\begin{cases}
1 & \text{if } t \in [0,1]\\
0 & \text{otherwise}
\end{cases}
$$
We know that $f$ is the probability density function of $X$ with respect to $\mu$, meaning that
$$
f:=\frac{dp}{d\mu}
$$
Is this somehow related to having $p(E)=\mu(E)$ for every Borel $E\subseteq [0,1]$?
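
If it is, the connection I can see is the following sketch (assuming $f=\frac{dp}{d\mu}$ as above): by the definition of the Radon–Nikodym derivative, for every $E\in\mathcal{B}(\mathbb{R})$,
$$
p(E)=\int_E f\,d\mu=\int_{E\cap[0,1]}1\,d\mu=\mu\big(E\cap[0,1]\big),
$$
which in particular would give $p(E)=\mu(E)$ for Borel $E\subseteq[0,1]$. But I do not see why this $f$ should be the density of the uniform distribution in the first place.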

Best Answer

This is more or less the definition of the uniform distribution. Properties we certainly expect from a uniform (on $[0,1]$) random variable $X$ are that $\Bbb P(X\in[0,1])=1$ and that $\Bbb P(X\in[a,b])=\Bbb P(X\in[a+c,b+c])$ whenever $0\le a\le b\le b+c\le 1$. Together with countable additivity, these requirements force $\Bbb P(X\in[a,b])=b-a$; and since a Borel probability measure on $\mathbb{R}$ is determined by its values on closed intervals, this already leads to $\Bbb P(X\in E)=\mu(E\cap[0,1])$ for every Borel set $E$.
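
A sketch of the first step, under the assumptions above (together with the fact that singletons have probability $0$, which itself follows from translation invariance and additivity, since $[0,1]$ contains infinitely many disjoint singletons of equal probability): splitting $[0,1]$ into $n$ translates of $[0,\tfrac1n]$ that overlap only in single points,
$$
1=\Bbb P(X\in[0,1])=\sum_{k=1}^{n}\Bbb P\Big(X\in\Big[\tfrac{k-1}{n},\tfrac{k}{n}\Big]\Big)=n\,\Bbb P\Big(X\in\Big[0,\tfrac{1}{n}\Big]\Big),
$$
so $\Bbb P(X\in[0,\tfrac1n])=\tfrac1n$; adding translates gives $\Bbb P(X\in[a,b])=b-a$ for rational endpoints, and monotone limits of intervals extend this to all $0\le a\le b\le 1$.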
