[Math] Defining a probability on the set of natural numbers

elementary-number-theory, measure-theory, probability

The standard definition of a probability on a set A is a sigma-additive function (a measure) from a sigma-algebra of subsets of A to the extended non-negative real line, such that the measure of the whole set A is 1.

If A is the set of natural numbers, such a definition cannot be made in such a way that every set consisting of a single natural number has the same probability: if mu({n}) = c > 0 for every natural n (where mu is the measure), then by sigma-additivity any infinite set would have infinite measure, and in particular mu(A) != 1; while if c = 0, sigma-additivity forces mu(A) = 0 != 1.

But it is truly natural to think about the probability of certain events relating to the choice of a random natural number, such as the probability of getting an even number out of a random choice of a natural number, which should be 1/2. This kind of probability would have to assign probability 0 to getting any particular natural number, as argued above.

I searched the forum and saw that many people work with probability on the natural numbers by relaxing the assumption that the probability is sigma-additive to its being merely finitely additive. This seems to solve the problem, although I still haven't found an algebra and a probability function which seem satisfying (but I guess it is not so hard).

The question, then, is what exactly one misses when relaxing the assumption of sigma-additivity. Does a definition of probability on the subsets of the natural numbers via a finitely additive set function lead to inconsistencies? What theorems about probability stop applying under such a definition?

Also, am I being too restrictive in my general definition of probability, and is it acceptable in modern mathematical culture to define probabilities via only finitely additive functions?

Best Answer

I still haven't found an algebra and a probability function which seem satisfying.

It is tricky to describe this finitely-additive measure explicitly. The naive approach would be to call $\mathcal{A}$ the class of subsets $A$ of $\mathbb N$ for which the "natural density" $$P(A) = \lim_n \frac{| A \cap \{1,\dots,n\}|}n$$ exists. Very nice, $\mathcal A$ is closed under complements, $P(\mathbb N)=1$, $P(\emptyset)=0$, $P($evens$)=P($odds$)=\frac12$, and in general $P(A^c)=1-P(A)$ for $A\in \mathcal A$.
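
For a quick numerical sanity check, these finite ratios $\frac{|A \cap \{1,\dots,n\}|}{n}$ can be computed directly; below is a minimal Python sketch (the helper name `density_upto` is just illustrative).

```python
import math

def density_upto(pred, n):
    """Finite ratio |{k <= n : pred(k)}| / n, approximating the natural density."""
    return sum(1 for k in range(1, n + 1) if pred(k)) / n

for n in (10**3, 10**5, 10**6):
    evens = density_upto(lambda k: k % 2 == 0, n)
    squares = density_upto(lambda k: math.isqrt(k) ** 2 == k, n)
    print(f"n = {n:>7}: evens ~ {evens:.4f}, perfect squares ~ {squares:.5f}")
# The ratio for the evens approaches 1/2, and for the perfect squares it approaches 0.
```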

However, $\mathcal A$ is not closed under intersections (here is a counter-example), so it is not an algebra.
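
To make the failure concrete, here is one standard counter-example (a sketch; it need not be the one behind the link above): let $B$ consist of the even numbers in each block $[4^j, 2\cdot 4^j)$ and the odd numbers in each block $[2\cdot 4^j, 4^{j+1})$. Then both the evens and $B$ have natural density $\frac12$, but the finite ratios of their intersection oscillate between roughly $\frac13$ (along $n = 2\cdot 4^k$) and $\frac16$ (along $n = 4^{k+1}$), so the intersection has no natural density. A small Python check, using the same illustrative helper as above:

```python
def density_upto(pred, n):
    """Finite ratio |{k <= n : pred(k)}| / n."""
    return sum(1 for k in range(1, n + 1) if pred(k)) / n

def in_B(k):
    # B = (evens in blocks [4^j, 2*4^j)) union (odds in blocks [2*4^j, 4^(j+1)))
    j = (k.bit_length() - 1) // 2      # the j with 4^j <= k < 4^(j+1)
    in_first_half = k < 2 * 4**j
    return k % 2 == 0 if in_first_half else k % 2 == 1

for k in range(3, 9):
    for n in (2 * 4**k, 4**(k + 1)):
        d_B = density_upto(in_B, n)
        d_int = density_upto(lambda m: m % 2 == 0 and in_B(m), n)
        print(f"n = {n:>6}: ratio of B ~ {d_B:.3f}, ratio of evens ∩ B ~ {d_int:.3f}")
# B's ratio settles near 1/2, but the intersection's ratio keeps oscillating
# between roughly 1/3 (at n = 2*4^k) and 1/6 (at n = 4^(k+1)).
```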

In practice, people consider finitely-additive extensions of this $P$ to the entire class $\mathcal P(\mathbb N)$ of all subsets of $\mathbb N$. The extensions are usually non-explicit and rely on a form of the Hahn-Banach theorem, which in turn uses the Axiom of Choice. A few references: 1 2 3
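
For concreteness, here is a hedged sketch of one standard way to obtain such an extension (the constructions in the references may differ in details): take a Banach limit $L$, i.e. a positive linear functional on the space of bounded real sequences with $L(1,1,1,\dots)=1$ that agrees with $\lim$ on every convergent sequence; its existence is exactly where Hahn-Banach enters. Then set $$P(A) = L\left(\left(\frac{|A\cap\{1,\dots,n\}|}{n}\right)_{n\ge 1}\right) \quad \text{for every } A\subseteq\mathbb N.$$ For disjoint $A$ and $B$ the density sequence of $A\cup B$ is the sum of the density sequences of $A$ and $B$, so linearity of $L$ gives finite additivity; and $P$ agrees with the natural density whenever that density exists.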

What exactly does one miss when relaxing the assumption of sigma-additivity? What theorems about probability stop applying under such a definition?

You miss everything related to limits, convergence, and infinite sums: continuity of the measure along monotone sequences of sets, the Borel-Cantelli lemmas, and the usual convergence theorems all depend on $\sigma$-additivity.

For example, let $A_k$ be the set of all even numbers greater than $k$ together with all multiples of $4$. This is a decreasing sequence of sets, and as $k\to\infty$ it decreases to the set $A$ of multiples of $4$, but $P(A_k) \not\to P(A)$: indeed $P(A_k)=\frac12$ for every $k$ while $P(A)=\frac14$, so continuity from above fails.
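
Here is a quick numerical illustration of this example (a minimal Python sketch; `density_upto` is again just an illustrative helper):

```python
def density_upto(pred, n):
    """Finite ratio |{k <= n : pred(k)}| / n, approximating the natural density."""
    return sum(1 for k in range(1, n + 1) if pred(k)) / n

N = 10**6
for K in (10, 1_000, 10_000):
    # A_K = (even numbers greater than K) union (all multiples of 4)
    p_AK = density_upto(lambda m: (m % 2 == 0 and m > K) or m % 4 == 0, N)
    print(f"P(A_{K}) ~ {p_AK:.3f}")   # stays near 1/2 for every fixed K
print(f"P(A)    ~ {density_upto(lambda m: m % 4 == 0, N):.3f}")   # near 1/4
```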

Does a definition of probability on the subsets of the natural numbers via a finitely additive set function lead to inconsistencies?

No. It leads to a theory that is less powerful than measure theory, but not inconsistent.

Is it acceptable in modern mathematical culture to define probabilities via only finitely additive functions?

Well, we would love to have a $\sigma$-additive probability measure $P$ on some $\sigma$-field $\mathcal{F}$ that coincides with the natural density for all sets $A$ that actually do have a natural density, but that's just impossible (countable additivity over the singletons, which all have density $0$, would force $P(\mathbb N)=0$). People may argue that the finitely additive theory ends up being less interesting, since it misses most of measure theory, but if you have a good reason to study "uniform" probabilities on $\mathbb{N}$, that's all you've got.