Solved – the relationship between event and random variable

conditional probability, distributions, probability, random variable

I've been told that an event is just a random variable that has been assigned, and that random variables are a generalisation of events. However, I can't relate that to the definition of an event as a subset of the sample space.
Moreover, an event can either happen or not, whereas a random variable can have multiple outcomes.

Are events like binary random variables? If so, then is each outcome of a random variable really an event?

I also need to know how the two concepts relate to each other in terms of conditional independence.

Best Answer

Let the experiment be given by $ \DeclareMathOperator{\P}{\mathbb{P}} \DeclareMathOperator{\E}{\mathbb{E}} (\mathbb{X},\mathbb{B}, \P)$ where $\mathbb{X}$ is the sample space, $\mathbb{B}$ is the set of all events (subsets of $\mathbb{X}$ to which we assign a probability) and $\P$ is the probability measure. Points of $\mathbb{X}$ are denoted $\omega$, and are the "elementary events" (or "outcomes"). Random variables on this experiment are functions $f \colon \mathbb{X}\to \mathbb{R}$ and are written as $f(\omega)$, meaning that their values are determined by the elementary outcome $\omega$.

Corresponding to the event $A$ is the indicator random variable $$ I_A(\omega) = \begin{cases} 1 ~\text{if $A$ occurs, that is, $\omega\in A$,} \\ 0 ~\text{if $A$ does not occur, that is, $\omega \not\in A$.} \end{cases} $$ In this sense, events can be embedded as a subset of the set of all random variables defined for this experimental setup. The probability of $A$ occurring can then be written as an expectation $$ \P(A) = \E I_A. $$
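This correspondence is easy to see on a finite sample space. Here is a minimal sketch (a hypothetical example, not from the original answer) using a fair six-sided die: the event $A$ = "the roll is even" is a subset of the sample space, its indicator $I_A$ is a function of the outcome $\omega$, and $\P(A)$ equals $\E I_A$.

```python
from fractions import Fraction

# Hypothetical example: a fair six-sided die as a finite sample space.
sample_space = [1, 2, 3, 4, 5, 6]
P = {omega: Fraction(1, 6) for omega in sample_space}  # uniform probability measure

# Event A = "the roll is even", a subset of the sample space.
A = {2, 4, 6}

# Indicator random variable of A: a function mapping each outcome to 0 or 1.
def I_A(omega):
    return 1 if omega in A else 0

# P(A) computed directly as a sum over the event ...
prob_A = sum(P[omega] for omega in A)

# ... equals the expectation E[I_A], a sum over the whole sample space.
expectation_I_A = sum(I_A(omega) * P[omega] for omega in sample_space)

print(prob_A, expectation_I_A)  # both Fraction(1, 2)
```

Using exact `Fraction` arithmetic avoids floating-point noise, so the two quantities compare equal exactly.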

To the additional question in the comments: if $A$ and $B$ are independent (as events), then $I_A$ and $I_B$ are independent (as random variables). "Can we say that $I_A=1$ and $I_B=1$ are independent?" Well, $\{I_A=1\}$ is simply the event $A$, so I think you can answer that now!
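The equivalence between independence of events and independence of their indicators can be checked concretely. A small sketch (again a hypothetical example, assuming two independent fair coin flips): $A$ = "first flip is heads", $B$ = "second flip is heads", and we verify both $\P(A \cap B) = \P(A)\P(B)$ and $\E[I_A I_B] = \E[I_A]\,\E[I_B]$.

```python
from fractions import Fraction
from itertools import product

# Hypothetical example: two independent fair coin flips.
sample_space = list(product("HT", repeat=2))
P = {omega: Fraction(1, 4) for omega in sample_space}  # uniform measure

A = {omega for omega in sample_space if omega[0] == "H"}  # first flip heads
B = {omega for omega in sample_space if omega[1] == "H"}  # second flip heads

def indicator(event):
    """Indicator random variable of an event (a subset of the sample space)."""
    return lambda omega: 1 if omega in event else 0

def E(f):
    """Expectation of a random variable f over the finite sample space."""
    return sum(f(omega) * P[omega] for omega in sample_space)

I_A, I_B = indicator(A), indicator(B)

# Independence of the events: P(A ∩ B) = P(A) * P(B) ...
lhs_events = sum(P[w] for w in A & B)
rhs_events = sum(P[w] for w in A) * sum(P[w] for w in B)

# ... matches independence of the indicators: E[I_A * I_B] = E[I_A] * E[I_B].
lhs_rv = E(lambda w: I_A(w) * I_B(w))
rhs_rv = E(I_A) * E(I_B)

print(lhs_events == rhs_events, lhs_rv == rhs_rv)  # True True
```

Note that $I_A I_B = I_{A \cap B}$, which is exactly why the two independence statements coincide.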
