I think there are two (very interesting) questions here. Let me try to address them.
First, the title question: do we presuppose natural numbers in first-order logic?
I would say the answer is definitely yes. We have to assume something to get off the ground; on some level, I at least take the natural numbers as granted.
(Note that there's a lot of wiggle room in exactly what this means: I know people who genuinely find it inconceivable that PA could be inconsistent, and I know people who find it very plausible, if not likely, that PA is inconsistent - all very smart people. But I think we do have to presuppose $\mathbb{N}$ to at least the extent that, say, [Presburger arithmetic](https://en.wikipedia.org/wiki/Presburger_arithmetic) is consistent.)
Note that this isn't circular, as long as we're honest about the fact that we really are taking some things for granted. This shouldn't be too weird - if you really take nothing for granted, [you can't get very much done](https://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achilles). In terms of foundations, note that we will still find it valuable to define the natural numbers inside our foundations; but this will be an "internal" expression of something we take for granted "externally." So, for instance, at times we'll want to distinguish between "the natural numbers" (as defined in ZFC) and "the natural numbers" (that we assume at the outset we "have" in some way).
Second question: Is it okay to view first-order logic as a kind of "proxy" for clear, precise natural language?
My answer is a resounding: Sort of! :P
On the one hand, I'm inherently worried about natural language. I don't trust my own judgment about what is "clear" and "precise." For instance, is "This statement is false" clear and precise? What about the Continuum Hypothesis?
For me, one of the things first-order logic does is pin down a class of expressions which I'm guaranteed are clear and precise. Maybe there are more of them (although I would argue, in a certain sense, that there aren't; see [Lindström's Theorem](https://en.wikipedia.org/wiki/Lindstr%C3%B6m%27s_theorem)), but at the very least anything I can express in first-order logic is clear and precise. There are a number of properties FOL has which make me comfortable saying this; I can go into more detail if that would be helpful.
So for me, FOL really is a proxy for clear and precise mathematical thought. There's a huge caveat here, though, which is that context matters. Consider the statement "$G$ is torsion" (here $G$ is a group). In the language of set theory with a parameter for $G$, this is first-order; but in the language of groups, there is no first-order sentence $\varphi$ such that [$G\models\varphi$ iff $G$ is torsion] for all groups $G$! This is a consequence of the Compactness Theorem for FOL.
So you have to be careful when asserting that something is first-order, if you're working in a domain that's "too small" (in some sense, set theory is "large enough," and an individual group isn't). But so long as you are careful about whether what you are saying is really expressible in FOL, I think this is what everyone does to a certain degree, or in a certain way.
At least, it's what I do.
Yes, there are lots of these - and the relevant notion is the arithmetic hierarchy.
There's a potential point of confusion worth addressing here, especially since it appeared in a now-deleted answer: we cannot conflate "true in $\mathbb{N}$" with "provable in $\mathsf{PA}$." In particular, for each formula $\varphi$ the set $$\{x: \mathsf{PA}\vdash\varphi(x)\}$$ is indeed r.e., but that does not mean that the set $$\{x: \mathbb{N}\models\varphi(x)\}$$ need be anywhere close to r.e.
Here's a brief summary. Every formula $\psi$ in the language of arithmetic is ($\mathsf{PA}$-provably) equivalent to one of the form $$Q_1x_1Q_2x_2....Q_nx_n\varphi(x_1,...,x_n)$$ where each $Q_i$ is a quantifier (either $\forall$ or $\exists$) and $\varphi$ uses only bounded quantifiers (quantifiers of the form $\forall y<t$ and $\exists y<t$, where $t$ is a term not mentioning $y$). We can get an upper bound on the complexity of the set defined by $\psi$ by looking at:
- the outermost quantifier $Q_1$, and
- the number of quantifier alternations ("$\forall\exists$" or "$\exists\forall$" - this can be much less than the total number of quantifiers).
A formula of the above type whose outer quantifier is $\exists$ and which has $i$-many quantifier alternations is called $\Sigma_{i+1}$; a formula whose outer quantifier is $\forall$ and which has $i$-many quantifier alternations is called $\Pi_{i+1}$.
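For concreteness (my own example, using Kleene's $T$-predicate, whose graph is decidable and - with a suitable coding - definable by a bounded-quantifier formula):

```latex
% The halting set
K = \{\, e \;:\; \exists s\, T(e,e,s) \,\}
% is \Sigma_1: outermost quantifier \exists, zero alternations. Its complement
\overline{K} = \{\, e \;:\; \forall s\, \neg T(e,e,s) \,\}
% is \Pi_1: outermost quantifier \forall, zero alternations.
```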
The first part of the connection between the arithmetic hierarchy and computational complexity is the following:
A set is r.e. iff it is definable by a $\Sigma_1$ formula.
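To make the easy direction of this concrete, here is a toy sketch (the function names and the particular matrix `phi` are mine, purely for illustration): given a decidable matrix $\varphi$, the set $\{x : \exists y\,\varphi(x,y)\}$ is enumerated by dovetailing over witness candidates.

```python
def phi(x, y):
    # Decidable "matrix": y encodes a pair (a, b), and phi holds when
    # a^2 + b^2 == x, so the defined set is the sums of two squares.
    a, b = divmod(y, x + 1)
    return a * a + b * b == x

def enumerate_sigma1(stages):
    """Enumerate {x : exists y. phi(x, y)} by dovetailing: stage n
    checks every pair (x, y) with x + y == n, yielding each x once."""
    seen = set()
    for n in range(stages):
        for x in range(n + 1):
            y = n - x
            if x not in seen and phi(x, y):
                seen.add(x)
                yield x

print(sorted(enumerate_sigma1(2000)))
```

The point of the dovetailing is that we never commit to a bound on the witness $y$ in advance - exactly the feature that makes the resulting set r.e. rather than decidable.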
See here. More generally, there is a connection between the arithmetical hierarchy on the one hand, and Turing reducibility and the Turing jump on the other:
For each $n\in\mathbb{N}$ we have $X\le_T{\bf 0^{(n)}}$ iff $X$ is definable by a $\Sigma_{n+1}$ formula and $X$ is definable by a $\Pi_{n+1}$ formula.
This is due to Post. You may also find Shoenfield's limit lemma to be of interest. The simplest natural example of an arithmetic set which is not r.e., nor indeed Turing-equivalent to any r.e. set (to rule out things like "the complement of the halting problem"), is in my opinion the set of indices for Turing machines which halt on all inputs. This set, which is often denoted "$\mathsf{Tot}$" (for "total"), has Turing degree ${\bf 0''}$ and is $\Pi_2$ but not $\Sigma_2$.
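Spelled out (again with the Kleene predicate $T(e,x,s)$, read as "machine $e$ halts on input $x$ within $s$ steps"):

```latex
\mathsf{Tot} \;=\; \{\, e \;:\; \forall x\, \exists s\; T(e,x,s) \,\}
% Outermost quantifier \forall, one alternation (\forall\exists),
% so by the classification above this is a \Pi_2 definition.
```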
(We say that a set is $\Sigma_n$ if it is definable by a $\Sigma_n$ formula, and similarly for $\Pi_n$; additionally, we say that a set is $\Delta_n$ iff it is both $\Sigma_n$ and $\Pi_n$. Note that there is no such thing as a "$\Delta_n$ formula" - whereas $\Sigma_n$-ness and $\Pi_n$-ness are syntactic properties, $\Delta_n$-ness is genuinely "semantic.")
Another important class of examples comes from the notion of bounded truth predicates. Per Tarski, the set $$\{\ulcorner\psi\urcorner: \mathbb{N}\models\psi\}$$ is not arithmetical (here "$\ulcorner\cdot\urcorner$" is your favorite Gödel numbering function). However, for each $n$ the set of Gödel numbers of true $\Sigma_n$ sentences is indeed $\Sigma_n$. The set of Gödel numbers of false $\Sigma_n$ sentences (or Turing-equivalently the set of Gödel numbers of true $\Pi_n$ sentences) is therefore $\Pi_n$ but not $\Sigma_n$. Similarly, the set of Gödel numbers of false $\Pi_n$ sentences (or Turing-equivalently the set of Gödel numbers of true $\Sigma_n$ sentences) is $\Sigma_n$ but not $\Pi_n$. Now just pick some really big $n$ (that is, $n>1$).
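A quick sanity check of the $n=1$ case (my own unpacking of the standard argument, not something spelled out above):

```latex
% A \Sigma_1 sentence has the form \exists x\,\theta(x) with \theta bounded,
% and the truth of a bounded sentence is decidable (just evaluate it). So
\{\,\ulcorner \exists x\,\theta \urcorner \;:\; \mathbb{N}\models\exists x\,\theta(x)\,\}
\;=\;
\{\,\ulcorner \exists x\,\theta \urcorner \;:\; \exists n\;(\mathbb{N}\models\theta(\underline{n}))\,\}
% which is an existential quantifier over a decidable matrix: \Sigma_1.
```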
The question is somewhat ambiguous about exactly how the sets would be enumerated. The natural way to read it in computability theory is the following, which is the usual sense in which a sequence of sets $A_0, A_1, \ldots$ can be r.e.: there is a single r.e. set $W$ such that $A_i = \{x : (i,x) \in W\}$ for every $i$ (that is, the sequence is uniformly r.e.).
The answer to that is certainly "no", because if it were true then every definable set would be r.e., which is not the case.
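A standard witness for that last claim (my choice of example, in line with the arithmetic hierarchy discussion above) is the complement of the halting set:

```latex
\overline{K} \;=\; \{\, e \;:\; \forall s\, \neg T(e,e,s) \,\}
% is definable (indeed by a \Pi_1 formula), but it is not r.e.: if it were,
% then K and \overline{K} would both be r.e., which would make the halting
% problem decidable.
```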