It really depends on what you mean by the "probability of randomly selecting $n$ natural numbers with property $P$". While you cannot pick a natural number uniformly at random, you can speak of the uniform distribution on $\{1, \dots, N\}$.
For the last problem, the probability is to be understood as the limit, as $N \to \infty$, of the probability of randomly selecting $n$ natural numbers from $1$ to $N$ that are all pairwise coprime.
Note that in this sense, the second problem also has an answer. Some probabilities of this type can be connected, via dynamical systems, to an ergodic measure and an ergodic theorem.
Added: The example provided by James Fennell is good for understanding the last paragraph above.
Consider ${\mathbb Z}_2 = {\mathbb Z}/2{\mathbb Z}$, and the action of ${\mathbb Z}$ on ${\mathbb Z}_2$ defined by
$$m+ ( n \mod 2)=(n+m) \mod 2$$
Then there exists a unique ergodic measure on ${\mathbb Z}_2$, namely $P(0 \mod 2)= P(1 \mod 2)= \frac{1}{2}$.
This is really what we intuitively understand by "half of the integers are even".
Now, ergodic theory yields (and this is something which can easily be proven directly in this case)
$$\lim_{N \to \infty} \frac{\text{number of even natural numbers} \leq N}{N} = P( 0 \mod 2) =\frac{1}{2} \,.$$
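This limit can be checked numerically. A minimal Python sketch (the variable names are mine, for illustration):

```python
# Fraction of even natural numbers <= N, for growing N; it settles at 1/2.
for N in [10, 100, 1_000, 10_000, 100_000]:
    evens = sum(1 for n in range(1, N + 1) if n % 2 == 0)
    print(N, evens / N)
```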
I'd like to give a semi-philosophical, semi-mathematical defense of the answer 1/2.
The received view in contemporary mathematical probability theory, due to Kolmogorov, is that probability functions are normalized measures: nonnegative, countably additive set functions that are defined on $\sigma$-fields and that assign the sure event measure $1$. This hasn't always been the received view, however.
Bruno de Finetti argued, famously, that the concept of probability does not mandate countable additivity. His argument is based on the fact, pointed out by you and commenters, that countable additivity rules out the possibility of a uniform distribution on the natural numbers. De Finetti, a subjectivist, thought there was nothing rationally incoherent about distributing one's credences uniformly over $\mathbb{N}$, and, on these grounds, argued that the countable additivity axiom be weakened. For de Finetti, probabilities need only be finitely additive.
Working in the tradition of finitely additive probability, it is indeed possible to define a uniform distribution on $\mathbb{N}$. The needed extension results can be found in Rao and Rao's Theory of Charges and Hrbacek and Jech's Introduction to Set Theory. I will sketch a construction here so that this answer has some mathematical content (if needed or requested, I will edit later to include more details).
First, we define the natural density, as in comments above, to be the limit (if it exists) $$ \lim_{n \to \infty} \frac{|A \cap \{1,...,n \}|}{n} = :d(A) $$ for $A \subseteq \mathbb{N}$.
It is not difficult to show
Proposition. (i) $d(\emptyset) = 0$ and $d(\mathbb{N})=1$. (ii) $d(\{n\})=0$ for all $n \in \mathbb{N}$. (iii) The set of numbers divisible by $m$ has natural density $1/m$. (NB: Some of this has been asserted in comments above, but I include it here for completeness.)
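Parts (ii) and (iii) of the Proposition can be checked empirically by computing finite approximations of the density. A short Python sketch (the helper name `density_upto` is mine):

```python
def density_upto(pred, n):
    """Finite approximation |{k <= n : pred(k)}| / n of the natural density."""
    return sum(1 for k in range(1, n + 1) if pred(k)) / n

for m in (2, 3, 7):
    approx = density_upto(lambda k: k % m == 0, 100_000)
    print(m, approx)  # close to 1/m
```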
We then have the following theorem.
Theorem. There exists a finitely additive probability measure $P$ on $\mathscr{P}(\mathbb{N})$ that extends the natural density $d$.
Proof sketch. The proof relies on the existence of a free (non-principal) ultrafilter $\mathcal{U}$ on $\mathbb{N}$, i.e. an ultrafilter extending the Fréchet filter of cofinite sets, and is therefore non-constructive. We say a sequence $(x_{n})$ of real numbers is convergent in $\mathcal{U}$ with limit $x$ and write $x = \lim_{\mathcal{U}}x_{n}$ if for all $\epsilon > 0$
$$\{n: |x_{n} - x| < \epsilon \} \in \mathcal{U}.$$
It can be shown that every bounded real sequence has a unique $\mathcal{U}$-limit. It can also be shown that
$$P(A) = \lim_{\mathcal{U}}\frac{|A \cap \{1,...,n \}|}{n}$$
is a finitely additive probability on $\mathscr{P}(\mathbb{N})$ extending $d$. $\square$
By the Theorem and the Proposition, $P$ is a finitely additive probability measure that assigns measure $1/2$ to the set of odd numbers.
Now, you may object that I've changed the subject by invoking merely finitely additive "probabilities". In response, I would remind you that the concept of probability predates the Kolmogorovian theory, and that there are good arguments (due to de Finetti and other subjectivists) in favor of relaxing countable additivity in some contexts.
Best Answer
I think the correct sum is $$ P(1)+P(3)+P(5)+\cdots=\sum_{k=0}^\infty 2\left(\frac{1}{3}\right)^{2k+1}=\frac{2}{3} \sum_{k=0}^\infty\left(\frac{1}{9}\right)^k=\frac{3}{4}~~,$$ because in that way you sum over the odd numbers only.
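The partial sums converge to $3/4$ very quickly; a quick numerical check in Python:

```python
# Partial sum of P(1) + P(3) + P(5) + ... with P(n) = 2 * (1/3)**n,
# truncated at 50 terms; the tail is geometric with ratio 1/9, so tiny.
total = sum(2 * (1/3) ** (2*k + 1) for k in range(50))
print(total)  # ~ 0.75
```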
It is not useful to think of selecting a number "at random" from $S$, because you have the probabilities explicitly: they tell you that numbers are selected from $S$ in such a way that you get a "$1$" two thirds of the time (on average), a "$2$" in two out of nine cases, and so on.
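To make "selecting at random from $S$" concrete, one can sample directly from the distribution $P(n)=2\left(\frac{1}{3}\right)^n$ and tally how often the draw is odd. A simulation sketch, assuming inverse-transform sampling over the standard library's `random.random` (the function name `sample` is mine):

```python
import random

def sample():
    """Draw n >= 1 with P(n) = 2 * (1/3)**n via inverse-transform sampling."""
    u, n, cdf = random.random(), 1, 0.0
    while True:
        cdf += 2 * (1/3) ** n
        if u < cdf:
            return n
        n += 1

random.seed(0)  # for reproducibility of the sketch
draws = [sample() for _ in range(100_000)]
odd_fraction = sum(d % 2 for d in draws) / len(draws)
print(odd_fraction)  # near 3/4, matching the series above
```

The empirical fraction of odd draws agrees with the series computation, which is the point of the answer: the distribution itself, not a "uniform" selection, determines the $3/4$.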