[Math] the difference between event and outcome in probability theory

probability-theory, real-analysis

Mathematically, we have a probability space $(X, \Omega, P)$, where $X$ is the sample space, which contains all the "outcomes", $\Omega$ is a $\sigma$-algebra on $X$, which contains all the "events", and $P$ is a probability measure.
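For concreteness (a small example of my own, using the notation above): for a fair six-sided die we could take
$$X = \{1,2,3,4,5,6\}, \qquad \Omega = 2^{X}\ (\text{all subsets of } X), \qquad P(A) = \frac{|A|}{6}\ \text{for } A \in \Omega.$$
Here the outcome $3$ is an element of $X$, while the event "the roll is odd" is the subset $\{1,3,5\} \in \Omega$, with $P(\{1,3,5\}) = 1/2$.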
I can understand the difference between "outcome" and "event" in the abstract: an outcome is an element of $X$, while an event is a set of outcomes. But in applications, what is the fundamental difference between an outcome and an event? Do we just define them without any specific reason, or are there criteria?
Another concern: by definition, it seems that in almost every case we define the probability of each outcome (more precisely, of the event containing only that outcome) to be the same number (I am considering the discrete case here). For example, if there are 6 outcomes in total, we tend to give every outcome probability $1/6$, even though nothing in the mathematics forces this. Intuitively, how can we say that all outcomes have the same probability?
For example, for a fair 4-sided die, we could define the outcomes as $\{\{1, 2\}, 3, 4\}$. Intuitively, $\{1, 2\}$ should not be an outcome, because it can be further divided into the two outcomes $\{1\}$ and $\{2\}$. But in real application problems it is not always that obvious: sometimes you do not know whether "an outcome" has been divided as finely as possible.
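To make the concern concrete (my own computation, not part of the original question): if the die is actually fair, then under the coarse description $\{\{1,2\},3,4\}$ we would have
$$P(\{1,2\}) = \tfrac{1}{2}, \qquad P(\{3\}) = P(\{4\}) = \tfrac{1}{4},$$
so the three "outcomes" are not equally likely; only after splitting $\{1,2\}$ into $\{1\}$ and $\{2\}$ does the uniform assignment of $1/4$ to each outcome become correct.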

Best Answer

In a real application, you want to make sure that an outcome includes all aspects of the result of the experiment that you might be interested in, and that every possible result of the experiment is covered. Beyond that, you have a lot of freedom. You usually do not want to clutter things up by including details that don't matter to you, unless having them there makes it easier to calculate probabilities. Thus for the die, even though the difference between 1 and 2 might not matter to you, you count them as separate outcomes because symmetry then makes the counting easier.

A less artificial example is when you roll several fair dice. You want to distinguish between outcomes "1 on the first die and 2 on the second" and "2 on the first die and 1 on the second", even if you don't care which is which, because this makes outcomes equally likely.
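Here is a short sketch of this point (my own addition, not part of the original answer); it enumerates both descriptions of two fair dice and shows that only the ordered outcomes are equally likely:

```python
from fractions import Fraction
from itertools import product

# Ordered outcomes: (first die, second die).  By symmetry, all 36 are equally likely.
ordered = {o: Fraction(1, 36) for o in product(range(1, 7), repeat=2)}

# Collapse to unordered outcomes by forgetting which die is which.
unordered = {}
for (a, b), prob in ordered.items():
    key = tuple(sorted((a, b)))
    unordered[key] = unordered.get(key, Fraction(0)) + prob

print(unordered[(1, 2)])  # 1/18: two ordered outcomes, (1, 2) and (2, 1), map to it
print(unordered[(1, 1)])  # 1/36: only one ordered outcome maps to it
```

So if you take the unordered pairs as your outcomes, they are no longer equally likely, and counting arguments become more error-prone.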

EDIT: Maybe I should also say something about $\sigma$-fields that do not contain singletons. At first this may seem rather artificial. But such things turn out to be quite useful when considering experiments that take place in stages.

For example, let's say your experiment consists of an infinite sequence of coin-tosses. The outcomes consist of all infinite sequences of Heads and Tails. But at a particular time $t$, you only know the results of the first $t$ tosses. You thus might define a $\sigma$-field $\mathcal F_t$ for the events that depend only on those first $t$ tosses. An example of a member of $\mathcal F_3$ would be the set of outcomes with Heads on the first toss and Tails on the second.
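Written out explicitly (my own formalization of that example): such an event is a set of sequences constrained only in the first three coordinates, e.g.
$$A = \{\omega = (\omega_1, \omega_2, \omega_3, \dots) : \omega_1 = \text{H},\ \omega_2 = \text{T}\} \in \mathcal F_3,$$
and in fact $A \in \mathcal F_2$ as well, since it puts no condition on the third toss. By contrast, the singleton $\{(\text{H},\text{H},\text{H},\dots)\}$ lies in no $\mathcal F_t$ for finite $t$, which is one natural way that $\sigma$-fields without singletons arise.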
