Seriously edited to address mistakes in the original version; see the edit history if interested. I do think that all the claims I made originally are true, FWIW.
First, let's put the question as stated to bed. Consider the "standard model" $\mathcal{N}$ of your theory with first-order part $\mathbb{N}$ and second-order part $\mathcal{P}_{fin}(\mathbb{N})$ (and the usual interpretations of $<$ and $\in$). We trivially have set formation and well-foundedness here, since we've included everything we could want (set formation will pose a subtlety below, which tripped me up originally).
But $\mathcal{N}$ is decidable, that is, its full theory $Th(\mathcal{N})$ is computable (see here). No theory with a decidable model can interpret PA, since PA is essentially undecidable, so in particular your theory doesn't. Indeed we can push this vastly below PA: your theory doesn't even interpret Robinson arithmetic.
Now it seems that a fairly simple argument should lift this observation to a proof that PA (and indeed a much weaker theory) proves the consistency of your theory. However, there are some subtleties here which I haven't yet nailed down comfortably, so I'll leave that for a later edit. The argument I gave in a previous edit (that any model of $Th(\mathbb{N}; <)$ expands to a model of your theory once we add all bounded definable sets) would do this, but it has a serious gap: comprehension in the expansion is not trivial. We need to argue that adding the ability to quantify over bounded definable subsets of our starting structure doesn't generate any new bounded definable sets, and - while I think this is true - I don't have a complete argument for it yet.
Now let me address the more general question which arose in the comments and subsequent edit:
When can first-order logic support definition by recursion?
Certainly it can't in general, even along genuine well-orderings: consider $\mathcal{N}_P=(\mathbb{N};<,+)$. Clearly we can give a recursive definition of multiplication here, but $\mathcal{N}_P$ is decidable while $(\mathbb{N};<,+,\times)$ very much isn't, so multiplication can't actually be first-order definable in $\mathcal{N}_P$.
So at this point it's a good idea to look back at how definition by recursion is sometimes possible, e.g. in PA. What we do there is use finite sequences, and we do this via pairing (or something morally equivalent). That's an important dependency: coding sequences precedes definition by recursion.
Now finite sequences can be used to provide "derivations" of instances of recursive definitions. For instance, assuming we have addition "in hand," a derivation of "$a$ times $b$ equals $c$" would be a sequence of length $b$ whose first term is $a$, last term is $c$, and with difference $a$ between successive terms. Induction principles can then be invoked to prove that appropriate derivations exist; for instance, the totality of multiplication amounts to
For all $a,b$, there is a unique $c$ such that there is a derivation of "$a$ times $b$ equals $c$,"
and this is proved by induction, roughly as follows:
Fixing $a$, suppose toward contradiction that $b$ is the minimal counterexample to the principle for this $a$.
By induction (the base case being boring), let $d$ be the unique number such that there is a derivation of "$a$ times $b-1$ equals $d$," and let $\sigma$ be that derivation.
Appending $a+d$ to $\sigma$ gives a derivation of "$a$ times $b$ equals $a+d$" (so take $c=a+d$). To prove uniqueness, note that any derivation of "$a$ times $b$ equals $e$" (for any $e$) must have the form of $e$ appended to a derivation of "$a$ times $b-1$ equals $u$" for some $u$. But by induction that $u$ is $d$, so $e=u+a=d+a=c$.
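The derivation notion above can be sketched concretely. Here is a minimal Python illustration (the function names `is_derivation` and `derive` are mine, purely for exposition); note that `derive` builds its sequence by repeated addition, matching the idea that we only have addition "in hand":

```python
def is_derivation(seq, a, b, c):
    """Check whether seq is a derivation of 'a times b equals c':
    a sequence of length b whose first term is a, last term is c,
    with difference a between successive terms. Assumes b >= 1
    (the base case b = 0 is handled separately in the proof)."""
    if b < 1 or len(seq) != b or seq[0] != a or seq[-1] != c:
        return False
    return all(seq[i + 1] - seq[i] == a for i in range(len(seq) - 1))

def derive(a, b):
    """Build the unique derivation witnessing a times b, using only addition."""
    seq = [a]
    while len(seq) < b:
        seq.append(seq[-1] + a)
    return seq

seq = derive(3, 4)
print(seq, is_derivation(seq, 3, 4, 12))  # [3, 6, 9, 12] True
```

The uniqueness step in the proof corresponds to the fact that, given $a$ and $b$, there is exactly one sequence passing the `is_derivation` check.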
But we can only even begin to do this after we've shown that all the relevant stuff about derivations is definable (so, again, our reasoning about sequences is preceding recursion).
One solution to this is to explicitly bake in the desired $\#$-operation. But we could also add definition by recursion to our underlying logic. The simplest approach to this yields least fixed point logic (LFP). The idea of LFP is to allow definitions which "build up in stages." Consider a formula $\varphi$ with a single free variable $x$ in our language together with a new unary predicate symbol $U$. From $\varphi$ we get a map $m_\varphi$ on subsets of our structure $\mathcal{M}$: $$A\mapsto\{x:\mathcal{M}_A\models\varphi(x)\},$$ where $\mathcal{M}_A$ is the expansion of $\mathcal{M}$ gotten by interpreting $U$ as $A$.
Now if $U$ occurs only positively in $\varphi$, the map $m_\varphi$ is monotonic in that $A\subseteq B\implies m_\varphi(A)\subseteq m_\varphi(B)$. This means that there is a least fixed point of $m_\varphi$ - that is, a set $LFP_\varphi$ such that
$m_\varphi(LFP_\varphi)=LFP_\varphi$, and
whenever $m_\varphi(X)=X$ we have $X\supseteq LFP_\varphi$.
Essentially, we think of $\varphi$ as telling us how to go from a partial approximation to an object we're trying to build to a better approximation, and $LFP_\varphi$ is the set of all things we ever throw in. Least fixed point logic basically lets us build $LFP_\varphi$.
Of course, least fixed points don't always look how we might want them to. For instance, consider the linear order $\mathbb{N}+\mathbb{Z}$. There is a natural way to try to define the "even" elements by LFP: $$\varphi(x,U)\equiv [x=0]\vee [\exists y\in U(x=SSy)]$$ (where $S$ is the successor operation, which is of course definable). But $LFP_\varphi$ doesn't extend into the $\mathbb{Z}$-part at all: the least fixed point of $m_\varphi$ consists of the standard even numbers only.
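The stage-by-stage construction can be run directly on a finite structure. Here is a small Python sketch (function names are mine) iterating a monotone operator from the empty set until it stabilizes, using the "even" operator above on an initial segment of $\mathbb{N}$:

```python
def lfp(operator, universe):
    """Iterate a monotone operator on subsets of a finite universe,
    starting from the empty set, until a fixed point is reached.
    Monotonicity guarantees this is the least fixed point."""
    current = frozenset()
    while True:
        nxt = operator(current)
        if nxt == current:
            return current
        current = nxt

# The 'even' operator: phi(x, U) = (x = 0) or (exists y in U with x = SSy).
universe = set(range(21))
def even_step(U):
    return frozenset(x for x in universe if x == 0 or (x - 2) in U)

print(sorted(lfp(even_step, universe)))  # [0, 2, 4, ..., 20]
```

On an infinite non-standard structure like $\mathbb{N}+\mathbb{Z}$ this iteration would close off after $\omega$ stages without ever reaching the $\mathbb{Z}$-part, which is exactly the phenomenon described above.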
I don't know too much about least fixed point logic. It is a hot topic in computer science, but there the focus is especially on finite models. I believe its general model theory is well-studied but is quite complicated.
The standard notation in logic would be $\exists^\infty$.
The exclamation mark ! is used to indicate uniqueness, $\exists^{!n} x\,\phi(x)$ being "there are exactly $n$ distinct elements $x$ such that $\phi(x)$". So the standard reading of $\exists^{!\infty}x\,\phi(x)$ would be "there are exactly infinitely many $x$ such that...," which is awkward: simply saying "infinitely many" is not very exact, since there are many infinite cardinalities to choose from. And if you are in a setting where the universe of discourse is countably infinite, for instance, then the "exactly" part is superfluous anyway.
You are correct that $\exists^{\infty}x\,\phi(x)$ is the same as $\forall n\,\exists^{\ge n}x\,\phi(x)$. This is the case regardless of whether you are in a situation where the axiom of choice holds. I mention this because using choice, $\exists^{\infty}x\,\phi(x)$ is equivalent to $Q_{\aleph_0}x\,\phi(x)$, where $\aleph_0=|\mathbb N|$ and, for a cardinal $\kappa$, $Q_\kappa x\,\phi(x)$ means that there are at least $\kappa$ distinct values of $x$ such that ... (The $Q_\kappa$ are called cardinality quantifiers in the literature.) However, if choice fails, a set may be infinite without containing a countably infinite subset.
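For concreteness, the standard first-order definitions being appealed to here (spelled out, though they also appear in the question) are:

$$\exists^{\ge n}x\,\phi(x)\;\equiv\;\exists x_1\cdots\exists x_n\left(\bigwedge_{1\le i<j\le n}x_i\ne x_j\;\wedge\;\bigwedge_{i=1}^{n}\phi(x_i)\right)$$

$$\exists^{!n}x\,\phi(x)\;\equiv\;\exists^{\ge n}x\,\phi(x)\;\wedge\;\neg\exists^{\ge n+1}x\,\phi(x)$$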
(And just to avoid confusion, let me add the remark mentioned in comments: Although each $\exists^{\ge n}$ is first-order definable as shown in the question, $\exists^\infty$ is a genuinely new quantifier, meaning that it is not first-order definable by a formula. Otherwise, its negation ("there are only finitely many") would also be first-order definable, and an easy compactness argument gives us a contradiction: the theory $$\{\lnot\exists^\infty x\,(x=x)\land\exists^{\ge n}x\,(x=x)\mid n\in\mathbb N\}$$ is inconsistent but any finite subset is consistent.)
Yes, $V_\omega$ is, up to isomorphism, the only model of the theory you describe: if $\mathcal M$ is such a model, then by induction on $n<\omega$ one can show that $\mathcal M$ has a member which is (isomorphic to) $V_n$, so there is a (unique) $\in^{\mathcal M}$-transitive substructure $\mathcal N$ of $\mathcal M$ which is isomorphic to $V_\omega$. If $\mathcal M$ contained a member $x_0$ not in $\mathcal N$, then there would be some $x_1\in^{\mathcal M} x_0$ not in $\mathcal N$ either, since otherwise $x_0$ would be a finite subset of $\mathcal N$ and hence an element of $\mathcal N$. Continuing in this way, one constructs a descending sequence $x_0\ni^{\mathcal M} x_1\ni^{\mathcal M} x_2\ni^{\mathcal M}\dots$, contradicting foundation. So $\mathcal M=\mathcal N\cong V_\omega$.
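The finite stages $V_n$ used in the induction can be computed concretely, since $V_0=\varnothing$ and $V_{n+1}=\mathcal{P}(V_n)$. A small Python sketch (the function name `V` is mine) builds each stage as nested frozensets:

```python
from itertools import combinations

def V(n):
    """Compute the cumulative stage V_n as a set of nested frozensets:
    V_0 is empty, and V_{k+1} is the power set of V_k."""
    stage = frozenset()
    for _ in range(n):
        elems = list(stage)
        # Power set of the current stage: all subsets of every size.
        stage = frozenset(
            frozenset(c)
            for r in range(len(elems) + 1)
            for c in combinations(elems, r)
        )
    return stage

print([len(V(n)) for n in range(5)])  # [0, 1, 2, 4, 16]
```

The sizes $|V_n|$ iterate exponentiation ($|V_{n+1}|=2^{|V_n|}$), so this only runs for small $n$, but it makes the transitive structure $\mathcal N\cong V_\omega$ quite tangible.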