I highly recommend reading Martin-Löf's paper referenced by Ulrik Buchholtz in the comments to your question. Apart from that, here are a couple of points that might help, some of which were already made by Andreas Blass in his answer.
A judgement is an act of knowing, or of asserting a piece of knowledge about a mathematical object. For instance, a non-traditional presentation of single-sorted first-order logic could have the following kinds of judgements:
- $t\ \mathsf{term}$, intended meaning "$t$ is a well-formed term"
- $P\ \mathsf{prop}$, intended meaning "$P$ is a well-formed formula"
- $\Gamma\ \mathsf{hyps}$, intended meaning "$\Gamma$ is a well-formed list of hypotheses"
- $\Gamma \vdash P\ \mathsf{holds}$, intended meaning "formula $P$ is derivable under hypotheses $\Gamma$".
You do not normally see all of these because for first-order logic the first three are very simple and they get relegated to an informal explanation of syntax. But notice that there is a difference between the syntactic object "$P \land Q$" and the assertion "$P \land Q$ is a well-formed formula" (a judgment). If you doubt that there is a difference, consider the syntactic object "$P \land (Q \lor R$". We can speak meaningfully about it ("It's missing a closing parenthesis", "It contains two connectives."), but we do not assert "$P \land (Q \lor R$ is a well-formed formula".
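To make the distinction concrete, here is a small sketch (my own illustration, not part of any standard presentation, with hypothetical names) in which raw syntactic objects form one type and the judgement "$P\ \mathsf{prop}$" is a check performed on them. Ill-formed objects, say a connective applied to the wrong number of arguments, exist as perfectly good values, but the judgement fails for them:

```ocaml
(* Raw syntax trees: these may apply a connective to the wrong number
   of arguments, so being a value of this type does not yet mean
   "well-formed formula".  All names here are hypothetical. *)
type raw =
  | Var of string               (* an atomic proposition *)
  | Conn of string * raw list   (* a connective applied to arguments *)

(* The judgement "P prop", implemented as a check on raw trees. *)
let rec is_prop = function
  | Var _ -> true
  | Conn (("and" | "or" | "imp"), [p; q]) -> is_prop p && is_prop q
  | Conn ("not", [p]) -> is_prop p
  | Conn _ -> false

(* is_prop (Conn ("and", [Var "P"]))  evaluates to  false *)
```

The value `Conn ("and", [Var "P"])` is a perfectly meaningful syntactic object we can talk about, but we do not assert the judgement "it is a well-formed formula" about it.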
Let me emphasize the difference between the sequent
$$\Gamma \vdash P$$
and the judgement
$$\Gamma \vdash P\ \mathsf{holds}.$$
The sequent is a syntactic object. The judgement is an assertion about the syntactic object. A lot of texts use the same notation for both, and that's where some of the confusion may be coming from. (Also note that many texts write "$\mathsf{true}$" instead of "$\mathsf{holds}$", but I want to emphasize here that I am not talking about semantic truth, but rather about derivability.)
Rules of inference explain how we can build evidence of assertions. The "syntactic" assertions, such as "$P$ is a well-formed formula" or "variable $x$ does not occur in $P$", can be explained in terms of inference rules, but you won't see them in logic textbooks, because they prefer to explain syntax in an informal way, e.g., "If $P$ is a wff and $Q$ is a wff then $P \land Q$ is a wff." If we want to, we can write down the corresponding formal rule:
$$\frac{P\ \mathsf{prop} \qquad Q\ \mathsf{prop}}{(P \land Q)\ \mathsf{prop}}.$$
This says: "If $P$ is a well-formed formula and $Q$ is a well-formed formula, then $P \land Q$ is a well-formed formula." It is different from the rule
$$\frac{\Gamma \vdash P\ \mathsf{holds} \qquad \Gamma \vdash Q\ \mathsf{holds}}{\Gamma \vdash P \land Q\ \mathsf{holds}}$$
which says: "If $P$ is derived from hypotheses $\Gamma$ and $Q$ is derived from hypotheses $\Gamma$, then $P \land Q$ can also be derived from hypotheses $\Gamma$."
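If one wanted to implement both judgements, a derivation of "$\Gamma \vdash P\ \mathsf{holds}$" could itself be represented as a tree whose node constructors are the inference rules. Here is a rough sketch (my own, with hypothetical names, continuing the `raw` type from the earlier sketch; the hypothesis rule is one natural choice of base case, not something fixed by the question):

```ocaml
(* Derivation trees for the judgement "Γ ⊢ P holds".
   Each constructor corresponds to an inference rule. *)
type derivation =
  | Hypothesis of raw list * raw           (* Γ ⊢ P when P occurs in Γ *)
  | AndIntro of derivation * derivation    (* the ∧-introduction rule above *)

(* Read off the sequent that a derivation claims to establish. *)
let rec conclusion = function
  | Hypothesis (gamma, p) -> (gamma, p)
  | AndIntro (d1, d2) ->
      let (gamma, p) = conclusion d1 in
      let (_, q) = conclusion d2 in
      (gamma, Conn ("and", [p; q]))

(* Check that a derivation really follows the rules, including the
   side conditions that the hypotheses and conclusion are well formed
   and that both premises of ∧-introduction use the same Γ. *)
let rec valid = function
  | Hypothesis (gamma, p) ->
      List.for_all is_prop gamma && is_prop p && List.mem p gamma
  | AndIntro (d1, d2) ->
      let (g1, _) = conclusion d1 and (g2, _) = conclusion d2 in
      valid d1 && valid d2 && g1 = g2
```

The point of the sketch is only to show that the formation judgement `is_prop` and the derivability judgement `valid` are two different checks on two different kinds of objects.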
To see why it might make sense to have formal rules for establishing that something is a wff, consider the rule:
$$\frac{ }{\Gamma \vdash t = t\ \mathsf{holds}}$$
It does not say that $t$ must be a well-formed term. It would be more precise to say:
$$\frac{\Gamma\ \mathsf{hyps} \qquad t\ \mathsf{term}}{\Gamma \vdash t = t\ \mathsf{holds}}$$
Now the rule is explicit: reflexivity holds only under well-formed hypotheses, and only for well-formed terms. Of course, we need to give rules for $t\ \mathsf{term}$ and $\Gamma\ \mathsf{hyps}$. They would be something like
$$
\frac{ }{()\ \mathsf{hyps}}
\qquad\qquad
\frac{P\ \mathsf{prop} \qquad \Gamma\ \mathsf{hyps}}{(P, \Gamma)\ \mathsf{hyps}}
$$
and (assuming that the language has variables $x_k$, a constant $1$ and a binary operation $+$):
$$
\frac{k \in \mathbb{N}}{x_k\ \mathsf{term}}
\qquad\qquad
\frac{ }{1 \ \mathsf{term}}
\qquad\qquad
\frac{t\ \mathsf{term}\qquad u\ \mathsf{term}}{(t + u)\ \mathsf{term}}
$$
I am fudging details about meta-level natural numbers that are used for indexing variable names. You can devise your own solution (for instance, use a unary representation of numbers and give inference rules for that representation).
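As an illustration (my own, using only the rules stated above), here is a complete derivation of the judgement $() \vdash 1 + 1 = 1 + 1\ \mathsf{holds}$, with the explicit reflexivity rule at the root:
$$
\frac{\dfrac{}{()\ \mathsf{hyps}}
\qquad
\dfrac{\dfrac{}{1\ \mathsf{term}} \qquad \dfrac{}{1\ \mathsf{term}}}{(1 + 1)\ \mathsf{term}}}
{() \vdash 1 + 1 = 1 + 1\ \mathsf{holds}}
$$
Every leaf is a rule with no premises, and every step is an instance of one of the rules given above, so well-formedness of the hypotheses and of the term is established as part of the derivation itself.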
It is perhaps silly to follow such rigorous formal discipline for first-order logic, but there are formal systems where the discipline is necessary. Dependent type theory is an example, as well as formal systems describing programming languages. These can grow quite complex, and since they also get implemented in practice, it is quite helpful to have all details written down formally.
Given that your proof system includes ZFC, the standard way to handle algebraic structure like this in a set-theoretic context is simply to interpret those concepts in set theory. In the language of set theory we can interpret the notions of signature, model, language, and it is simply a ZFC theorem that every model $\mathcal{M}$ in a given signature admits a satisfaction relation $\models$, as defined and proved by Tarski. And part of what that means is that $\mathcal{M}\models\forall x\,\phi$ if and only if for every $y$ in the domain of $\mathcal{M}$ we have $\mathcal{M}\models\phi[x:=y]$. That is, your biconditionals are just part of the Tarski recursion, defining what it means to have a satisfaction relation $\models$.
From this perspective, we don't need to add any new inference rules to the proof system—we are just proving things in ZFC.
However, a slightly revised and more formally correct perspective is that whenever we introduce defined terms to a theory, such as defining the concepts of ordinal, functions, sequences, models, signature, satisfaction $\models$ and so on in ZFC, then we have introduced a definitional expansion of the theory to include these defined terms. We can treat this formally by augmenting the formal system with the definitions of those defined terms. If one does this, then the right treatment of $\models$ would not be just what you wrote, but rather the full Tarski recursive definition of satisfaction.
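For concreteness, here is a small sketch (my own toy illustration, not a ZFC formalization; the names and the restriction to a finite domain and unary relation symbols are just for readability) of the Tarski recursion written as an evaluator over a model:

```ocaml
(* A toy version of the Tarski satisfaction recursion: a finite domain,
   an interpretation of unary relation symbols, and a recursive
   definition of "M ⊨ φ under an assignment of values to variables". *)
type term = Var of string

type formula =
  | Rel of string * term          (* R(t) for a unary relation symbol R *)
  | And of formula * formula
  | Not of formula
  | Forall of string * formula

type model = {
  domain : int list;              (* the (finite) underlying set *)
  rel : string -> int -> bool;    (* interpretation of relation symbols *)
}

let eval_term env (Var x) = List.assoc x env

let rec sat m env = function
  | Rel (r, t) -> m.rel r (eval_term env t)
  | And (p, q) -> sat m env p && sat m env q
  | Not p -> not (sat m env p)
  | Forall (x, p) ->
      (* M ⊨ ∀x.φ  iff  M ⊨ φ with x assigned to a, for every a in the domain *)
      List.for_all (fun a -> sat m ((x, a) :: env) p) m.domain
```

The $\forall$ clause is exactly the biconditional mentioned above; the full Tarski definition inside ZFC is the (infinitary, set-theoretic) analogue of this recursion, and its existence for every model is the theorem in question.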
For geometric theories in a countable fragment (axiomatized by countably many sequents in which the disjunctions are also countable), there exists a sound and complete system that appears already in Makkai and Reyes, "First-order categorical logic". On page 159 it is described in the section "Completeness of a one-sided system for coherent logic". One should be aware that what Makkai and Reyes there called "coherent" is what we now call "geometric". Also, although they use a Gentzen-style sequent calculus, their system is easily seen to be equivalent to the system of geometric logic through sequents as described in the question, which appears, e.g., in Johnstone's "Sketches of an Elephant", Section D1.3.
If one removes the countability condition, then the system is no longer complete for usual set models, but there is still a completeness theorem in terms of models in toposes (and, in fact, in terms of Boolean-valued models). Makkai and Reyes also treat these cases in the same book.
If one insists on having a complete system in terms of set models even when the theory has $\kappa$ sequents for some uncountable $\kappa$, the workaround is to relax the notion of geometric formula to that of a $\kappa$-geometric formula, in which conjunctions are not necessarily finitary, but of size less than $\kappa$. These are preserved by the $\kappa$-geometric morphisms, which are those geometric morphisms whose inverse image additionally preserves all $\kappa$-small limits. The deductive system in this case features a new infinitary rule and is described in my paper "Infinitary generalizations of Deligne's completeness theorem", The Journal of Symbolic Logic, Volume 85, Issue 3, pp. 1147–1162.