The first-order theory of the algebraic and order properties of the real numbers is the theory of [real closed fields](https://en.wikipedia.org/wiki/Real_closed_field), and you will find various axiomatizations when you follow the link.
A structure with the first-order properties of the real numbers may not satisfy the completeness axiom, which is not first-order. For example, the field of hyperreal numbers has the same first-order properties as the field of real numbers, but the set of finite hyperreals is nonempty and bounded above by every infinite number, yet has no supremum: a finite upper bound $s$ is exceeded by the finite number $s+1$, while an infinite upper bound $s$ is not least, because $s-1$ is a smaller infinite upper bound.
I think an important answer is still not present, so I am going to type it. This is somewhat standard knowledge in the field of foundations, but it is not always adequately described in lower-level texts.
When we formalize the syntax of formal systems, we often talk about the set of formulas. But this is just a way of speaking; there is no ontological commitment to "sets" as in ZFC. What is really going on is an "inductive definition". To understand this you have to temporarily forget about ZFC and just think about strings that are written on paper.
The inductive definition of a "propositional formula" might say that the set of formulas is the smallest class of strings such that:
- Every variable letter is a formula (presumably we have already defined a set of variable letters).
- If $A$ is a formula, so is $\lnot (A)$. Note: this is a string with 3 more symbols than $A$.
- If $A$ and $B$ are formulas, so is $(A \land B)$. Note: this adds 3 more symbols to the ones in $A$ and $B$.
This definition can certainly be read as a definition in ZFC. But it can also be read in a different way. The definition can be used to generate a completely effective procedure that a human can carry out to tell whether an arbitrary string is a formula (a proof along these lines, which constructs a parsing procedure and proves its validity, is in Enderton's logic textbook).
In this way, we can understand inductive definitions in a completely effective way without any recourse to set theory. When someone says "Let $A$ be a formula," they mean to consider the situation in which we have in front of us a string, written on a piece of paper, that our parsing algorithm says is a correct formula. We can perform that algorithm without any knowledge of "sets" or ZFC.
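To see how such a parsing procedure might look, here is a minimal sketch in Python (my own toy version, not the construction from Enderton), using ASCII stand-ins ~ and & for $\lnot$ and $\land$ and assuming the variable letters are just p, q, r:

```python
def is_formula(s):
    """Return True iff s is a variable letter, ~(A), or (A&B) with A, B formulas."""
    if len(s) == 1:
        return s in 'pqr'            # the assumed stock of variable letters
    if s.startswith('~(') and s.endswith(')'):
        return is_formula(s[2:-1])   # strip the 3 symbols added by negation
    if s.startswith('(') and s.endswith(')'):
        body = s[1:-1]
        # Try every occurrence of & as the outermost connective.
        return any(ch == '&' and is_formula(body[:i]) and is_formula(body[i + 1:])
                   for i, ch in enumerate(body))
    return False

assert is_formula('(p&~(q))')
assert not is_formula('p&q')         # missing the outer parentheses
```

The procedure consults nothing but the string itself, which is exactly the sense in which the inductive definition is effective.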
Another important example is "formal proofs". Again, I can treat these simply as strings to be manipulated, and I have a parsing algorithm that can tell whether a given string is a formal proof. The various syntactic metatheorems of first-order logic are also effective. For example, the deduction theorem gives a direct algorithm to convert one sort of proof into another sort of proof (a sketch of such a conversion is below). The algorithmic nature of these metatheorems is not always emphasized in lower-level texts, but it is very important in contexts like automated theorem proving.
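To make this concrete, here is a minimal sketch of the deduction-theorem conversion, with the details chosen by me for illustration rather than taken from any particular textbook: a Hilbert-style propositional system whose only rule is modus ponens and whose axiom schemes include K: $p \to (q \to p)$ and S: $(p \to (q \to r)) \to ((p \to q) \to (p \to r))$. Formulas are encoded as nested tuples instead of literal strings, but the manipulation is just as effective.

```python
def imp(p, q):
    """Build the formula p -> q (formulas are nested tuples)."""
    return ('imp', p, q)

def deduction(a, proof):
    """Convert a proof that may use the hypothesis `a` into one that does not.

    `proof` is a list of (formula, justification) pairs, where a justification
    is 'axiom', 'hyp', or ('mp', i, j) with line i the antecedent and line j
    the implication.  For each original line C, the output proves a -> C.
    """
    out = []   # the transformed proof
    new = {}   # new[n] = index in `out` of the line proving a -> (line n of proof)
    for n, (c, just) in enumerate(proof):
        if c == a:
            # Derive a -> a from one S instance and two K instances.
            out.append((imp(imp(a, imp(imp(a, a), a)),
                            imp(imp(a, imp(a, a)), imp(a, a))), 'axiom'))   # S
            out.append((imp(a, imp(imp(a, a), a)), 'axiom'))                # K
            out.append((imp(imp(a, imp(a, a)), imp(a, a)),
                        ('mp', len(out) - 1, len(out) - 2)))
            out.append((imp(a, imp(a, a)), 'axiom'))                        # K
            out.append((imp(a, a), ('mp', len(out) - 1, len(out) - 2)))
        elif just in ('axiom', 'hyp'):
            # c -> (a -> c) is a K instance; modus ponens yields a -> c.
            out.append((c, just))
            out.append((imp(c, imp(a, c)), 'axiom'))                        # K
            out.append((imp(a, c), ('mp', len(out) - 2, len(out) - 1)))
        else:
            # c came by modus ponens from d and d -> c; use an S instance.
            _, i, j = just
            d = proof[i][0]
            out.append((imp(imp(a, imp(d, c)),
                            imp(imp(a, d), imp(a, c))), 'axiom'))           # S
            out.append((imp(imp(a, d), imp(a, c)), ('mp', new[j], len(out) - 1)))
            out.append((imp(a, c), ('mp', new[i], len(out) - 1)))
        new[n] = len(out) - 1
    return out
```

The point is not the details but the character of the procedure: a purely mechanical, line-by-line rewriting of one string of symbols into another.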
So if you examine a logic textbook, you will see that all the syntactic aspects of basic first-order logic are given by inductive definitions, and the algorithms given to manipulate them are completely effective. Authors usually do not dwell on this, both because it is completely standard and because they do not want to overwhelm the reader at first. So the convention is to write definitions "as if" they are definitions in set theory, and allow the readers who know what's going on to read the definitions as formal inductive definitions instead. When read as inductive definitions, these definitions would make sense even to the fringe of mathematicians who don't think that any infinite sets exist but who are willing to study algorithms that manipulate individual finite strings.
Here are two more examples of the syntactic algorithms implicit in certain theorems:
- Gödel's second incompleteness theorem actually gives an effective algorithm that can convert any PA-proof of Con(PA) into a PA-proof of $0=1$. So, under the assumption that there is no proof of the latter kind, there is no proof of the former kind.
- The method of forcing in ZFC actually gives an effective algorithm that can turn any proof of $0=1$ from the assumptions of ZFC and the continuum hypothesis into a proof of $0=1$ from ZFC alone. Again, this gives a relative consistency result.
Results like the previous two bullets are often called "finitary relative consistency proofs". Here "finitary" should be read to mean "providing an effective algorithm to manipulate strings of symbols".
This viewpoint helps explain where weak theories of arithmetic such as PRA (primitive recursive arithmetic) enter into the study of foundations. Suppose we ask: "what axioms are required to prove that the algorithms we have constructed will do what they are supposed to do?" It turns out that very weak theories of arithmetic suffice. PRA is a particular theory of arithmetic that is very weak from the point of view of stronger theories like PA or ZFC, yet strong enough to prove that (formalized versions of) the syntactic algorithms work correctly, and it is often used for exactly this purpose.
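As a toy illustration of the recursion scheme that gives PRA its name (the encoding here is mine, purely for illustration), every PRA function symbol is generated from zero, successor, and projections by composition and primitive recursion:

```python
def primrec(base, step):
    """Return f with f(0, *xs) = base(*xs) and f(n+1, *xs) = step(n, f(n, *xs), *xs)."""
    def f(n, *xs):
        acc = base(*xs)
        for k in range(n):
            acc = step(k, acc, *xs)
        return acc
    return f

# Addition and multiplication as primitive recursive definitions
# (acc + 1 plays the role of the successor symbol):
add = primrec(lambda y: y, lambda k, acc, y: acc + 1)       # add(n, y) = n + y
mul = primrec(lambda y: 0, lambda k, acc, y: add(acc, y))   # mul(n, y) = n * y
assert add(2, 3) == 5 and mul(3, 4) == 12
```

PRA can prove properties of such definitions, and of the formalized syntactic algorithms above, while remaining far weaker than PA or ZFC.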
Best Answer
I think there are two (very interesting) questions here. Let me try to address them.
I would say the answer is definitely yes. We have to assume something to get off the ground; on some level, I at least take the natural numbers as given.
(Note that there's a lot of wiggle room in exactly what this means: I know people who genuinely find it inconceivable that PA could be inconsistent, and I know people who find it very plausible, if not likely, that PA is inconsistent - all very smart people. But I think we do have to presuppose $\mathbb{N}$ at least to the extent that, say, [Presburger arithmetic](https://en.wikipedia.org/wiki/Presburger_arithmetic) is consistent.)
Note that this isn't circular, as long as we're honest about the fact that we really are taking some things for granted. This shouldn't be too weird - if you really take nothing for granted, you can't get very much done (see [What the Tortoise Said to Achilles](https://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achilles)). In terms of foundations, note that we will still find it valuable to define the natural numbers inside our foundations; but this will be an "internal" expression of something we take for granted "externally." So, for instance, at times we'll want to distinguish between "the natural numbers" (as defined in ZFC) and "the natural numbers" (that we assume at the outset we "have" in some way).
My answer is a resounding: Sort of! :P
On the one hand, I'm inherently worried about natural language. I don't trust my own judgment about what is "clear" and "precise." For instance, is "This statement is false" clear and precise? What about the Continuum Hypothesis?
For me, one of the things first-order logic does is pin down a class of expressions which I'm guaranteed are clear and precise. Maybe there are more of them (although I would argue there aren't any, in a certain sense; see [Lindström's Theorem](https://en.wikipedia.org/wiki/Lindstr%C3%B6m%27s_theorem)), but at the very least anything I can express in first-order logic is clear and precise. There are a number of properties FOL has which make me comfortable saying this; I can go into more detail if that would be helpful.
So for me, FOL really is a proxy for clear and precise mathematical thought. There's a huge caveat here, though, which is that context matters. Consider the statement "$G$ is torsion" (here $G$ is a group). In the language of set theory with a parameter for $G$, this is first-order; but in the language of groups, there is no first-order sentence $\varphi$ such that [$G\models\varphi$ iff $G$ is torsion] for all groups $G$! This is a consequence of the Compactness Theorem for FOL: add a new constant $c$ and the axioms $c^n \neq e$ for every $n \geq 1$; each finite subset of these axioms holds, together with $\varphi$, in a large enough finite cyclic group, so compactness would produce a model of $\varphi$ containing an element of infinite order.
So you have to be careful when asserting that something is first-order if you're working in a domain that's "too small" (in some sense, set theory is "large enough," and an individual group isn't). But so long as you are careful about whether what you are saying is really expressible in FOL, I think this is what everyone does, to a certain degree or in a certain way.
At least, it's what I do.