First of all, there's no reason to restrict the definition of finite additivity to finite spaces. It's just that if a finitely additive measure is defined on a finite space, then, trivially, it's countably additive.
We can extend finite additivity as you've defined it by induction. Your axiom implies that, for any $n \in \mathbb{N}$, if $A_1,\dots,A_n$ are pairwise disjoint events, then $$\sum_{k=1}^n P(A_k) = P\left(\bigcup_{k=1}^n A_k\right).$$
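Since the induction only ever handles finitely many sets at a time, the claim can be checked mechanically. Here is a small Python sketch (the distribution and the events are made up for illustration) confirming that additivity holds over a finite disjoint collection:

```python
from fractions import Fraction

# Hypothetical discrete space: Omega = {0, ..., 9} with uniform point masses.
p = {w: Fraction(1, 10) for w in range(10)}

def P(event):
    """Probability of a subset of Omega, summing point masses exactly."""
    return sum(p[w] for w in event)

# Pairwise disjoint events A_1, ..., A_4.
events = [{0, 1}, {2}, {3, 4, 5}, {7, 9}]
union = set().union(*events)

# Finite additivity: the sum of the parts equals the measure of the union.
assert sum(P(A) for A in events) == P(union)
```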
But finite additivity does not imply countable additivity. Indeed, consider the "fair integer lottery" (the uniform distribution over integers) that was of interest to De Finetti. This is a finitely additive probability measure defined on all subsets of $\mathbb{Z}$ such that $P(\{ z\})=0$ for all $z \in \mathbb{Z}$. Countable additivity fails because
$$P(\mathbb{Z}) = 1 \neq 0 = \sum_{z \in \mathbb{Z}} P(\{z \}).$$
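One way to build intuition for the lottery (though not a full construction) is the natural density $d(A) = \lim_{n\to\infty} |A \cap [-n, n]| / (2n+1)$: it is finitely additive on the sets where the limit exists, gives every singleton density zero, and gives $\mathbb{Z}$ density one; the ultrafilter construction discussed below extends it to all subsets. A rough numerical sketch (the window size is an arbitrary choice):

```python
def density(pred, n=10**5):
    """Approximate the natural density of {z in Z : pred(z)} on [-n, n]."""
    window = range(-n, n + 1)
    return sum(1 for z in window if pred(z)) / len(window)

# Each singleton gets (approximately) probability 0 ...
assert density(lambda z: z == 42) < 1e-4
# ... yet their countable union, all of Z, has probability 1,
# so countable additivity cannot hold for the idealized limit measure.
assert density(lambda z: True) == 1.0
```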
Note that this is analogous to Lebesgue measure on $[0,1]$, for which we have countable additivity but not uncountable additivity.
Now, it's a little bit difficult to show rigorously that the measure described above actually exists. A relatively easy way to do it uses an ultrafilter. The existence of the required ultrafilter is usually established by using the axiom of choice or its equivalent, Zorn's lemma. (I wrote a little bit about this here.)
In fact, any ultrafilter $\mathcal{U}$ defines a finitely additive measure by setting $P(U)=1$ if $U \in \mathcal{U}$ and $P(U) = 0$ if $U \notin \mathcal{U}$. See this for example.
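The non-principal ultrafilters needed for the fair lottery cannot be exhibited explicitly, but a *principal* ultrafilter (all sets containing some fixed point) is easy to write down, and the resulting $0$/$1$ measure is already finitely additive. A Python sketch of that easy case:

```python
point = 3  # the principal ultrafilter at `point` (an arbitrary choice)

def P(U):
    """0/1 measure induced by the ultrafilter: P(U) = 1 iff U contains `point`."""
    return 1 if point in U else 0

# Finite additivity on disjoint sets: at most one piece can contain `point`.
A, B = {1, 2, 3}, {4, 5}
assert A.isdisjoint(B)
assert P(A | B) == P(A) + P(B)
```

Of course, a principal ultrafilter just gives a Dirac point mass, which is even countably additive; the purely finitely additive examples come from non-principal ultrafilters, which is where the axiom of choice enters.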
On the other hand, it has been shown that any purely finitely additive measure is non-constructible: the existence of such a measure cannot be proved with the ZF axioms of set theory alone. See this paper.
Let $B_n = A_n \setminus A_{n+1}$ and $C_n = A_1 \setminus A_n = \bigcup_{k=1}^{n-1} B_k$ for all $n \in \mathbb N$.
Clearly the sets $B_1, B_2, B_3, \dots$ are pairwise disjoint, while $C_1 \subseteq C_2 \subseteq C_3 \subseteq \dots$ form an ascending chain, with $\bigcup_{n=1}^\infty B_n = \bigcup_{n=1}^\infty C_n = A_1$. Further, since $C_n = A_1 \setminus A_n$ and $A_n \subseteq A_1$, we have $P(C_n) = P(A_1) - P(A_n)$.
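These set identities are easy to sanity-check in Python on a hypothetical truncated descending chain $A_n = \{n, n+1, \dots, N\}$:

```python
N = 20
A = {n: set(range(n, N + 1)) for n in range(1, N + 1)}  # A_n = {n, ..., N}
B = {n: A[n] - A[n + 1] for n in range(1, N)}           # B_n = A_n \ A_{n+1}
C = {n: A[1] - A[n] for n in range(1, N + 1)}           # C_n = A_1 \ A_n

# The B_n are pairwise disjoint ...
assert all(B[i].isdisjoint(B[j]) for i in B for j in B if i != j)
# ... and C_n is exactly the union of B_1, ..., B_{n-1}.
for n in range(2, N + 1):
    assert C[n] == set().union(*(B[k] for k in range(1, n)))
```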
We wish to show that $$\lim_{n\to\infty} P(A_n) = \lim_{n\to\infty} \bigl(P(A_1) - P(C_n)\bigr) = 0.$$
To do that, it suffices to observe that
$$
\lim_{n\to\infty} P(C_n)
= \lim_{n\to\infty} \sum_{k=1}^{n-1} P(B_k)
= \sum_{n=1}^\infty P(B_n)
= \sum_{n=1}^\infty \sum_{\omega \in B_n} p(\omega)
= \sum_{\omega \in \bigcup_{n=1}^\infty B_n} p(\omega)
= \sum_{\omega \in A_1} p(\omega)
= P(A_1).
$$
Basically, we're splitting the initial event $A_1$ into a disjoint union of events $B_1, B_2, B_3, …$, where the event $B_n$ contains exactly those outcomes $\omega \in \Omega$ that are removed from the descending chain of events $A_1 \supseteq A_2 \supseteq A_3 \supseteq …$ at the $n$-th step. We then observe that first adding up the probabilities of the outcomes in $B_1$, then those in $B_2$, etc. is equivalent to adding up all the outcomes in $A_1 = B_1 \cup B_2 \cup B_3 \cup …$; either way, we end up counting each outcome exactly once. Thus, conversely, as we first remove the outcomes in $B_1$ from the sum, then those in $B_2$, etc., we'll eventually end up removing every outcome from the sum, and are thus left with a limit probability of zero.
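The telescoping argument can also be checked numerically on a concrete example, say $\Omega = \{1, 2, 3, \dots\}$ with $p(\omega) = 2^{-\omega}$ and $A_n = \{n, n+1, \dots\}$, so that $P(A_n) = 2^{-(n-1)}$ (the tail sums are truncated at a large cutoff for the floating-point computation):

```python
def p(w):
    # Hypothetical point masses p(w) = 2**-w on Omega = {1, 2, 3, ...}.
    return 2.0 ** -w

def P_tail(n, cutoff=60):
    """P(A_n) for A_n = {n, n+1, ...}, truncated far out in the tail."""
    return sum(p(w) for w in range(n, cutoff))

P_A1 = P_tail(1)
for n in (2, 5, 10, 20):
    # P(C_n), computed as the telescoping sum of the P(B_k),
    # matches P(A_1) - P(A_n) up to floating-point error.
    P_Cn = sum(P_tail(k) - P_tail(k + 1) for k in range(1, n))
    assert abs(P_Cn - (P_A1 - P_tail(n))) < 1e-12
# And P(A_n) -> 0, as the argument shows.
assert P_tail(40) < 1e-11
```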
Best Answer
Good question - no, the probability measure is not unique. It is in general a measure defined on the sample space $\Omega$, which must be equipped with a sigma-algebra $\mathcal{F}$. The pair $(\Omega, \mathcal{F})$ is called a measurable space (in the sense that it admits a measure - here, a probability measure).
However, when we ask for the probability that $X \in A$ (e.g., $P[X\leq 1]$), what we actually measure is the subset $X^{-1}(A)$ of $\Omega$.
Rewind: let's take it from the beginning. $(\Omega, \mathcal{F})$ is a measurable space. We equip it with some probability measure $P:\mathcal{F} \to [0,1]$ (which satisfies the axioms you mentioned). Let $(E, \mathcal{E})$ be another measurable space - take for example $\mathbb{R}$ with its Borel subsets.
A function $X:\Omega \to E$ is called a random variable if it is measurable, that is, if $X^{-1}(C) \in \mathcal{F}$ whenever $C\in \mathcal{E}$.
The "probability that $X$ is in $A$", denoted by $P[X\in A]$, is a shorthand for
$$ P[X^{-1}(A)] = P[\{\omega \in \Omega : X(\omega) \in A\}]. $$
This explains why we required that $X$ be measurable: if $X$ is measurable, then $\{\omega \in \Omega : X(\omega) \in A\}$ belongs to $\mathcal{F}$ whenever $A \in \mathcal{E}$, so the right-hand side above is defined.
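Here is a concrete (hypothetical) finite example in Python: $\Omega$ is the 36 outcomes of two fair dice, $X$ is their sum, and $P[X \in A]$ is computed, exactly as defined above, as the measure of the preimage $X^{-1}(A)$:

```python
from fractions import Fraction

Omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]  # two fair dice

def X(omega):
    """The random variable: sum of the two dice."""
    return omega[0] + omega[1]

def prob(event):
    """P on subsets of Omega: each outcome has mass 1/36."""
    return Fraction(len(event), 36)

# P[X in A] is, by definition, P of the preimage X^{-1}(A).
A = {7, 11}
preimage = {w for w in Omega if X(w) in A}
assert prob(preimage) == Fraction(8, 36)  # 6 ways to roll 7, 2 ways to roll 11
```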
The value of this probability depends on the choice of $P$ on $(\Omega, \mathcal{F})$, and that choice is certainly something one needs to specify. Sometimes probability measures are described in terms of distribution functions, that is, functions $F_X(x) = P[X\leq x]$. This description does not let us measure the sets of $\mathcal{F}$ directly (but we usually don't care); it does, however, let us probe $\Omega$ through $X$.
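To illustrate with a made-up example, the distribution function can be computed purely through preimages of rays $(-\infty, x]$, without ever naming a set of $\mathcal{F}$ directly:

```python
from fractions import Fraction

# Hypothetical setup: X = number of heads in two fair coin tosses.
Omega = [(a, b) for a in (0, 1) for b in (0, 1)]

def X(w):
    return w[0] + w[1]

def F(x):
    """F_X(x) = P[X <= x] = P[X^{-1}((-inf, x])], with uniform P on Omega."""
    return Fraction(sum(1 for w in Omega if X(w) <= x), len(Omega))

assert F(0) == Fraction(1, 4)
assert F(1) == Fraction(3, 4)
assert F(2) == 1
```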
I hope this answers your question to some extent.