[Math] What’s the difference between a random variable and a measurable function

measure-theory, probability-theory, random-variables

I've tried to wrap my head around the measure theoretical definition of a random variable for a couple of days now.

In his book Probability and Stochastics, Erhan Çinlar defines a measurable function as follows:

Let (E, ℰ) and (F, Ƒ) be measurable spaces [where ℰ and Ƒ are σ-algebras on the sets E and F respectively]. A mapping f : E ↦ F is
said to be measurable relative to ℰ and Ƒ if

f⁻¹B ∈ ℰ for every B in Ƒ.

Later, he defines a random variable as follows:

Let (Ω, H, ℙ) be a probability space. The set Ω is called the sample space; its elements are called outcomes. The σ-algebra H may be called the grand history; its elements are called events.

[…]

Let (F, Ƒ) be a measurable space. A mapping X : Ω ↦ F is called a
random variable taking values in (F, Ƒ) provided that it be measurable relative to H and Ƒ, that is, if

X⁻¹A = {X ∈ A} := {ω ∈ Ω : X(ω) ∈ A} is an event [i.e. ∈ H] for every A in Ƒ

Aside from using (Ω, H) instead of (E, ℰ), these definitions look essentially identical to me. What's the difference? Are all measurable functions on probability spaces random variables? (And why is it called random if the mapping itself is deterministic?)

Best Answer

In the theory of probability as developed by Kolmogorov, random variables are just measurable functions, provided that the underlying space is a probability space, i.e. a measure space whose total measure is 1 (a normalized finite measure).
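To see the identification concretely, here is a minimal Python sketch (the finite coin-flip example and all names in it are my own illustration, not from Çinlar's book): on a finite sample space we can literally check the defining condition, namely that X⁻¹A is an event for every A in the target σ-algebra.

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s as frozensets (the largest sigma-algebra on a finite set)."""
    s = list(s)
    return {frozenset(c)
            for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))}

# Probability space (Omega, H, P): two fair coin flips.
Omega = {"HH", "HT", "TH", "TT"}
H = power_set(Omega)                   # grand history: every subset is an event
P = {omega: 0.25 for omega in Omega}   # uniform probability on outcomes

# Target measurable space (F, F_sigma): number of heads.
F = {0, 1, 2}
F_sigma = power_set(F)

# The mapping X : Omega -> F.
X = {"HH": 2, "HT": 1, "TH": 1, "TT": 0}

def preimage(mapping, A):
    """X^{-1}A = {omega in Omega : X(omega) in A}."""
    return frozenset(omega for omega, value in mapping.items() if value in A)

# X is a random variable iff X^{-1}A is an event for every A in the target sigma-algebra.
is_random_variable = all(preimage(X, A) in H for A in F_sigma)
print(is_random_variable)  # True: every preimage lies in H
```

With H taken to be the full power set, every mapping out of Ω is measurable; the condition only becomes restrictive when H is a smaller σ-algebra.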

That's actually the key idea of Kolmogorov's theory. It allows us to rigorously encode everything one did in elementary probability theory in terms of measure spaces and measurable functions. In this formulation, quantities like the expected value (and the other moments) are just integrals, i.e. linear functionals, which lets one use all of the results of measure theory and functional analysis (inequalities, convergence theorems, etc.) as tools for probabilistic calculations.
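As a small illustration of that last point (again an assumed finite example of my own, not from the book), on a finite probability space the expectation is just the integral of X against ℙ, and its linearity can be checked directly:

```python
# Expectation on a finite probability space: E[X] = sum over omega of X(omega) * P({omega}).
Omega = {"HH", "HT", "TH", "TT"}
P = {omega: 0.25 for omega in Omega}        # uniform probability measure
X = {"HH": 2, "HT": 1, "TH": 1, "TT": 0}    # number of heads
Y = {"HH": 1, "HT": 0, "TH": 0, "TT": 1}    # indicator that both flips agree

def expectation(f, P):
    """Integral of f against P on a finite probability space."""
    return sum(f[omega] * P[omega] for omega in P)

# Linearity of the functional: E[3X + 2Y] = 3 E[X] + 2 E[Y].
Z = {omega: 3 * X[omega] + 2 * Y[omega] for omega in Omega}
print(expectation(X, P))                              # 1.0
print(expectation(Z, P))                              # 4.0
print(3 * expectation(X, P) + 2 * expectation(Y, P))  # 4.0
```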