There is an important set-theoretic issue here: to be "categorical", a theory must have exactly one "model" (up to isomorphism). If some reasonable candidate for second-order ZFC were categorical, its unique "model" would have to be the class of all sets. But then it would not have a set model, so it would actually be inconsistent in second-order semantics.
Put another way: there is a cardinal $\kappa$ such that if a countable theory in second-order logic has any model, then it has a model of size less than or equal to $\kappa$ (this is related to the Löwenheim number of second-order logic). But it would not make sense to call a set theory "second order ZFC" if its unique model had size less than $\kappa$, since we know there are sets larger than $\kappa$. And no matter what countable second-order theory we consider, we will never manage to exceed $\kappa$. (Surely any reasonable candidate for second-order ZFC would have at most a countable number of axioms.) So most of the set-theoretic universe would be omitted by any candidate for "categorical second-order ZFC".
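For reference, here is one standard formulation of the Löwenheim number mentioned above (this formula is my own addition, not part of the original answer):

$$\ell(\mathcal{L}) \;=\; \min\{\kappa : \text{every satisfiable sentence of } \mathcal{L} \text{ has a model of cardinality} \leq \kappa\}.$$

For second-order logic with full semantics, this cardinal exists but is known to be extremely large; the argument above uses the analogous bound for countable theories rather than single sentences.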
Nevertheless, there are theories that are often called "second-order ZFC". One such theory is just ZFC, but with the axiom scheme of replacement replaced by a single second-order axiom that quantifies over class functions and says that the image of any set under any class function is again a set. These theories are not categorical in second-order logic, but at least they are consistent, and their models are much better behaved than arbitrary models of first-order ZFC.
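For concreteness, one common way to write this single second-order replacement axiom (my formulation, not quoted from the original) is:

$$\forall F\,\forall a\,\exists b\,\forall y\,\bigl(y \in b \;\leftrightarrow\; \exists x\,(x \in a \wedge F(x) = y)\bigr).$$

Here $F$ is a second-order function variable, so under full second-order semantics it ranges over arbitrary class functions on the domain, rather than only over functions definable by first-order formulas as in the usual replacement scheme.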
The question assumes that there is some notion of "set" in first-order logic itself, but there is none. We use sets to study first-order logic, particularly its semantics (models). But these sets are part of the metatheory we use to study logic, not part of "first-order logic" itself. For example, if we look at the first-order theory of groups, there is nothing in it about "sets".
If we look more at the syntactic (proofs) side, we can get by with a much weaker metatheory, one which only needs to manipulate strings. Theories often used for this purpose include Peano arithmetic and the weaker Primitive Recursive Arithmetic. In these theories, there aren't directly any "sets", just natural numbers, although these theories have ways to talk about functions from numbers to numbers and, as such, indirectly talk about some kinds of sets.
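To illustrate how a theory of natural numbers can "indirectly talk about some kinds of sets", here is a sketch (my own example, not from the original answer) of the standard Ackermann-style coding, where a single natural number codes a finite set of naturals via its binary digits:

```python
# Sketch: a number n codes the finite set {i : bit i of n is 1}.
# This is one standard way arithmetic theories indirectly speak of
# finite sets, using only natural numbers and arithmetic operations.

def member(i: int, code: int) -> bool:
    """Is i an element of the finite set coded by `code`? (bit test)"""
    return (code >> i) & 1 == 1

def encode(s) -> int:
    """Code a finite set of naturals as a single natural number."""
    return sum(1 << i for i in set(s))

def decode(code: int) -> set:
    """Recover the finite set coded by `code`."""
    return {i for i in range(code.bit_length()) if member(i, code)}

# The set {0, 2, 3} is coded by 2^0 + 2^2 + 2^3 = 13.
assert encode({0, 2, 3}) == 13
assert decode(13) == {0, 2, 3}
assert member(2, 13) and not member(1, 13)
```

The point is that "membership" here is just an arithmetic predicate on numbers, which is why theories like Peano arithmetic can simulate reasoning about finite sets without having sets as primitive objects.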
The really fundamental concepts in first-order logic are alphabet, signature, language, theory, formal proofs/derivability, and models/satisfiability. All but the last of these can be very satisfactorily studied using Peano arithmetic as our metatheory. Once we move to studying models - which are again a fundamental part of first-order logic - we usually find it more satisfactory to work in a stronger metatheory that is able to construct and work with models more directly.
On the nature of logic
The other thing about this particular question: it is common for people first studying mathematical logic to think that the main purpose of studying logic is to find the most primitive objects of mathematics and then to rebuild mathematics from these primitive objects -- this is the foundational aspect of logic.
That is indeed one aspect of mathematical logic, but far from the only one. Historically, the foundational aspect was of particular interest around the turn of the 20th century, but it is no longer of such primary interest. From the contemporary viewpoint, another purpose of mathematical logic is simply to understand mathematics better by using the techniques that have come to be called "mathematical logic". I think that, for historical reasons and because it is interesting, the foundational aspect tends to be slightly over-emphasized in introductory materials.
For example, another common and important thread in mathematical logic is definability - the study of which aspects of mathematical structures can be expressed in which formal languages. This thread runs very heavily through computability theory and model theory, and is also found in set theory and proof theory.
Yet another common thread is an interest in the mathematical objects of logic for their own sake: some logicians study sets because they like sets, not as a way to study foundations. Some study computability because they like computability, without much interest in philosophical aspects. Some research topics in model theory are essentially indistinguishable from abstract algebra or analysis.
The foundational aspect of logic is still important, of course, and there are still people who work primarily on foundations. But the idea that mathematical logic will provide some sort of rock-solid foundation to all the rest of mathematics is not really part of the contemporary study of foundations. Instead we think about a range of theories, each suitable for its own foundational purpose. For studying the semantics of first-order logic, we need a theory that includes some way to handle models, which are particular kinds of sets.
As the shift from a mainly foundational viewpoint to a more broadly mathematical viewpoint occurred, several mathematical logic books from the mid 20th century included detailed explanations in the introduction about why they use advanced mathematical methods to study logic. One good treatment of this topic is in Monk's logic book, which can be found pretty cheaply these days.
The purpose of this section, which may be a slight digression, is to explain that one reason that it is not easy to see how logic is developed "out of nothing" from absolutely first principles is that, often, that isn't the goal that contemporary logicians have in discussing logic. They aren't necessarily trying to develop logic and mathematics from absolutely first principles.
Best Answer
It seems there are at least two ways to think about this.
First Way. Just because first-order logic is a language used to express the axioms of the natural number system does not mean that the natural number system fails to exist without first-order logic. First-order logic simply provides us with the syntax to express certain statements about the natural numbers, but the properties of the natural numbers (such as induction) do not depend on the syntax in which they are expressed.
This kind of thinking does not require any sort of Platonism. All you need to accept is that concepts are not the same as the syntax used to express them. In this view, there is no problem or circularity in invoking one system (the natural number system) to prove things about another system (the system of first-order logic).
Second Way. Perhaps, in spite of the point made above, you may insist that you really want to write out some kind of textbook that proceeds as linearly as possible. On this view, every single thing should be constructed from things established earlier. In this case everything turns out to be perfectly okay as well, because you don't really need universal "for all" results about first-order logic to build $\mathsf{PA}_1$.
The two results I cited that rely on induction on wffs were the unique readability theorem and the theorem on recursive definitions on wffs. Both of these are simply not needed to describe $\mathsf{PA}_1$ and induction. When you do get to a point where you formalize induction, you can go back and begin proving things about the framework you were using, things that were true all along but did not need to be stated explicitly.
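To make "recursive definitions on wffs" concrete, here is a small sketch (my own illustration, with invented names, not part of the original answer). Formulas are represented as nested tuples, and a function on formulas is defined by cases on how each formula was built; unique readability is exactly what guarantees such a definition picks out a single value for each wff:

```python
# Atomic formulas are variable names (strings); compound formulas are
# tagged tuples built by the constructors below.

def Not(p):
    return ("not", p)

def And(p, q):
    return ("and", p, q)

def depth(wff) -> int:
    """Connective depth of a wff, defined by recursion on its structure."""
    if isinstance(wff, str):       # atomic case: a propositional variable
        return 0
    if wff[0] == "not":            # negation case
        return 1 + depth(wff[1])
    if wff[0] == "and":            # conjunction case
        return 1 + max(depth(wff[1]), depth(wff[2]))
    raise ValueError("not a wff")

# depth of (p AND (NOT q)) is 2: one for the conjunction, one for the negation.
phi = And("p", Not("q"))
assert depth(phi) == 2
```

Proving that `depth` is well defined for every wff is an instance of the theorem on recursive definitions; the point of the paragraph above is that you can state $\mathsf{PA}_1$ and work with it before ever proving that metatheorem.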