I know that there is no uniform distribution on the set of natural numbers, since such a distribution would contradict the axioms of probability theory. In particular, suppose a uniform distribution on $\mathbb{N}$ existed: if $P(n) > 0$, then the sum of the probabilities of the elementary outcomes would be infinite, and if $P(n) = 0$, then that sum would equal $0$. Now to the real numbers. We can define a uniform distribution on $[0,1]$ under which the probability of choosing any particular number at random is $P(x) = 0$. So the question is: why is the sum of the elementary outcomes on $[0,1]$ equal to $0 + 0 + 0 … + 0 = 1$, while on the natural numbers it equals $0$?
Uniform distribution on $\mathbb{N}$ and the real line
probability, probability distributions
Related Solutions
You could do it by integration, or argue it directly.
To argue it directly, all you need to recognize is that since the two samples are independent and identically distributed, $P(X_1>X_2) = P(X_1<X_2)$ by symmetry. We also have $P(X_1>X_2)+P(X_1=X_2)+P(X_1<X_2) = 1$, since the three events are mutually exclusive and exhaustive. Because the distribution is continuous, $P(X_1=X_2) = 0$, and hence $P(X_1>X_2) = P(X_1<X_2) = \frac{1}{2}$.
To do it by integration, first find $P(X_1>X_2 | X_2=x)$. Since the distribution is uniform on $[0,1]$, $P(X_1>X_2 | X_2=x) = P(X_1 > x) = 1-x$. Now $P(X_1>X_2) = \displaystyle \int_{0}^1 P(X_1>X_2 | X_2=x) f_{X_2}(x) \, dx = \int_{0}^1 (1-x) \times 1 \, dx = \frac{1}{2}.$
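As a quick sanity check (a simulation sketch added here, not part of the argument above; the seed and sample size are arbitrary), a Monte Carlo estimate in R should land close to $\frac{1}{2}$:

set.seed(1)
x1 <- runif(1e6)   # first sample of independent Uniform(0,1) draws
x2 <- runif(1e6)   # second, independent sample
mean(x1 > x2)      # Monte Carlo estimate of P(X1 > X2); close to 0.5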
EDIT:
As Yuval points out, this is true irrespective of the distribution, as long as the distribution is continuous so that $P(X_1=X_2)=0$.
The direct argument holds good irrespective of the distribution.
Also, leonbloy's argument based on areas still works out fine irrespective of the distribution.
As for the argument based on integration,
$P(X_1>X_2 | X_2=x) = 1-F_{X}(x)$.
Now,
$P(X_1>X_2) = \displaystyle \int_{ll}^{ul} P(X_1>X_2 | X_2=x) dF_{X_2}(x) = \int_{ll}^{ul} (1-F_X(x)) dF_X(x)$, where $ll$ and $ul$ denote the lower and upper limits of the support of $X$.
Hence,
$P(X_1>X_2) = \displaystyle \int_{ll}^{ul} (1-F_X(x)) dF_X(x) = \int_{ll}^{ul} dF_X(x) -\int_{ll}^{ul} d(\frac{F_X^2(x)}{2})$
$P(X_1>X_2) = F_X(ul) - F_X(ll) - \frac{F_X^2(ul) - F_X^2(ll)}{2} = 1 - 0 - \frac{1-0}{2} = \frac{1}{2}$
All these seemingly different arguments are fundamentally the same idea expressed in different ways, but I thought it would be good to write it out explicitly.
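The distribution-free claim can be checked the same way; here is a simulation sketch using an Exponential(1) distribution (any continuous distribution would do; the seed and sample size are arbitrary):

set.seed(2)
x1 <- rexp(1e6)    # independent Exponential(1) draws
x2 <- rexp(1e6)
mean(x1 > x2)      # again close to 0.5, regardless of the particular continuous distribution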
I believe this is intended to be an elementary Bayesian inference problem. It seems you have decided to let $\theta = P(\text{Heads})$ have the "flat" or "noninformative" prior distribution $\mathsf{Unif}(0,1) \equiv \mathsf{Beta}(\alpha=1,\beta=1).$ So your prior distribution is $p(\theta) = 1.$
A uniform prior is not the only possible choice: (a) If you had some prior experience with or knowledge of the coin, you might choose a prior distribution that reflects your prior opinion. Perhaps you look at the coin, play with it a bit, and decide it seems close to fairly balanced. Then you might choose the prior distribution $\mathsf{Beta}(5,5)$ which puts roughly half of its probability in $(.4, .6)$ [based on a computation in R, where
qbeta(c(.25,.75), 5, 5)
returns 0.3919 and 0.6080.] (b) Another popular choice for a noninformative prior is $\mathsf{Beta}(.5, .5).$ [Google "Jeffreys prior".]
Your choice of a prior distribution will influence your final conclusions after you have data and get a posterior distribution. (I think @Twis7ed's Comment is suggesting that priors other than uniform are possible.)
Then from your experiment, you have $n=20$ Bernoulli trials resulting in $x = 15$ Heads, so your likelihood function is $p(x|\theta) = \theta^x(1-\theta)^{n-x} = \theta^{15}(1-\theta)^5.$
Then according to Bayes' Theorem, the posterior distribution (using the uniform prior) is $$p(\theta|x) \propto p(\theta) \times p(x|\theta) \propto \theta^{15}(1-\theta)^5,$$
which we recognize as the kernel (PDF without constant) of $\mathsf{Beta}(16,6).$
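For reference, the conjugate update at work here can be stated in general: $$\mathsf{Beta}(\alpha,\beta) \text{ prior } + \; x \text{ heads in } n \text{ trials} \;\Longrightarrow\; \mathsf{Beta}(\alpha+x,\, \beta+n-x) \text{ posterior},$$ so with $\alpha=\beta=1$, $x=15$, and $n=20$ this gives $\mathsf{Beta}(16,6)$ as above.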
The posterior distribution can be used to get a point estimate, which might be the posterior mean $\frac{16}{16+6} \approx 0.7273$, the posterior median 0.7343 (from R), or the posterior mode $\frac{15}{20} = 0.75.$
qbeta(.5, 16, 6)
[1] 0.7342603
You could also use the posterior distribution to find a Bayesian 95% probability interval $(0.53, 0.89)$ (from R).
qbeta(c(.025, .975), 16, 6)
[1] 0.5283402 0.8871906
Note: Maybe you can get the posterior distribution using the prior $\mathsf{Beta}(5, 5)$ and the same likelihood function, and see what difference that alternate choice of prior distribution makes in the point and interval estimates.
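A minimal R sketch of that comparison, assuming the same data (15 heads in 20 trials): the $\mathsf{Beta}(5,5)$ prior leads to the posterior $\mathsf{Beta}(5+15,\, 5+5) = \mathsf{Beta}(20,10)$.

20 / (20 + 10)                # posterior mean, about 0.667 (vs. about 0.727 with the uniform prior)
qbeta(.5, 20, 10)             # posterior median
qbeta(c(.025, .975), 20, 10)  # 95% posterior probability interval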
Best Answer
In your question, you wrote $0+0+0+\cdots+0=1$, which is not what you meant—you meant $0+0+0+\cdots=1$, which represents the reason that there is no uniform distribution on the countable set of natural numbers. However, $0+0+0+\cdots$ inherently denotes a countable sum of $0$s, not an uncountable sum of $0$s. The probability axioms (as Brian Tung commented) only require that countable sums of disjoint probabilities add to the right thing; they don't require that of uncountable sums of disjoint probabilities.
So the facts that $P\bigl( [0,1] \bigr)=1$, and $P\bigl( \{x\} \bigr) = 0$ for all $x\in[0,1]$, and $[0,1] = \bigcup_{x\in[0,1]} \{x\}$, are all true; but they don't contradict the probability axioms, because this last union is an uncountable union, not a countable union.
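For reference, the countable additivity axiom says $$P\Bigl(\bigcup_{i=1}^{\infty} A_i\Bigr) = \sum_{i=1}^{\infty} P(A_i) \quad \text{for pairwise disjoint events } A_1, A_2, \ldots,$$ and it makes no claim about uncountable collections such as $\bigl\{\{x\} : x \in [0,1]\bigr\}$.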