Addressing your issue with Gödel: note that even the one-use $\omega$-rule is not computable. There's no obstacle to a non-recursive theory being complete.
Actually this is a bit more subtle than I'm giving it credit for, but the point still holds. Using your restricted $\omega$-rule, we do have finitary-ish proof objects: e.g. a proof of $PA+\omega_{PA}\vdash \forall x\varphi(x)$ is a Turing machine $\Phi_e$ such that for all $n$, $\Phi_e(n)$ is a $PA$-proof of $\varphi(n)$. In this sense, a proof is a finite object. However, telling whether $\Phi_e$ has this property is not computable! That is, proof-verification is non-effective.
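To make the shape of such a proof object concrete, here is a toy sketch in Python (the encodings are entirely hypothetical; strings stand in for PA-proofs, and a plain function stands in for the machine $\Phi_e$):

```python
# Toy model of a proof object for the restricted omega-rule: a "proof"
# of forall x. phi(x) is a program that, for each n, returns a
# (purported) PA-proof of phi(n). Encodings here are stand-ins only.

def omega_witness(n):
    # Stand-in for a Turing machine Phi_e: maps each numeral n to an
    # object standing in for a PA-proof of phi(n).
    return f"PA-proof of phi({n})"

def looks_like_proof_of(obj, n):
    # Stand-in for the decidable check "obj is a PA-proof of phi(n)".
    return obj == f"PA-proof of phi({n})"

# Each *instance* is checkable, and the proof object itself is finite...
assert all(looks_like_proof_of(omega_witness(n), n) for n in range(1000))

# ...but verifying the proof object means checking *every* n, which no
# algorithm can do for an arbitrary machine: any finite spot-check like
# the one above falls short of the universal claim.
```

The point of the sketch is only that finitely many instance-checks never amount to verifying the universal property, which is where effectivity of proof-verification fails.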
Note that things get even worse once we allow more $\omega$-ruliness...
I highly recommend reading Martin-Löf's paper referenced by Ulrik Buchholtz in the comments to your question. Apart from that, here are a couple of points that might help, some of which were already made by Andreas Blass in his answer.
A judgement is an act of knowing, or of asserting, a piece of knowledge about a mathematical object. For instance, a non-traditional single-sorted first-order logic could have the following kinds of judgements:
- $t\ \mathsf{term}$, intended meaning "$t$ is a well-formed term"
- $P\ \mathsf{prop}$, intended meaning "$P$ is a well-formed formula"
- $\Gamma\ \mathsf{hyps}$, intended meaning "$\Gamma$ is a well-formed list of hypotheses"
- $\Gamma \vdash P\ \mathsf{holds}$, intended meaning "formula $P$ is derivable under hypotheses $\Gamma$".
You do not normally see all of these because for first-order logic the first three are very simple, so they get relegated to an informal explanation of syntax. But notice that there is a difference between the syntactic object "$P \land Q$" and the assertion "$P \land Q$ is a well-formed formula" (a judgement). If you doubt that there is a difference, consider the syntactic object "$P \land (Q \lor R$". We can speak meaningfully about it ("It's missing a closing parenthesis", "It contains two connectives."), but we do not assert "$P \land (Q \lor R$ is a well-formed formula".
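The distinction becomes very tangible if we treat well-formedness as something to be checked. Here is a minimal sketch for a toy propositional language of my own choosing (atoms $P, Q, R$; connectives written `&` and `|`; fully parenthesized), not a fragment of any standard system:

```python
# A minimal well-formedness checker for a toy propositional language.
# "is_wff(s)" decides the judgment "s prop" about the syntactic object s.

def is_wff(s):
    ok, rest = _formula(s)
    return ok and rest == ""

def _formula(s):
    # formula ::= atom | "(" formula op formula ")"
    if s[:1] in ("P", "Q", "R"):
        return True, s[1:]
    if s[:1] == "(":
        ok, rest = _formula(s[1:])
        if ok and rest[:1] in ("&", "|"):
            ok2, rest2 = _formula(rest[1:])
            if ok2 and rest2[:1] == ")":
                return True, rest2[1:]
    return False, s

assert is_wff("(P&(Q|R))")    # the judgement "(P&(Q|R)) prop" holds
assert not is_wff("(P&(Q|R")  # a syntactic object, but not a wff
```

Both strings exist as syntactic objects, and we can compute with both; only one of them supports the judgement "... is a well-formed formula".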
Let me emphasize the difference between the sequent
$$\Gamma \vdash P$$
and the judgement
$$\Gamma \vdash P\ \mathsf{holds}.$$
The sequent is a syntactic object. The judgement is an assertion about the syntactic object. A lot of texts use the same notation for both, and that's where some of the confusion may be coming from. (Also note that many texts write "$\mathsf{true}$" instead of "$\mathsf{holds}$", but I want to emphasize here that I am not talking about semantic truth, but rather about derivability.)
Rules of inference explain how we can build evidence for assertions. The "syntactic" assertions, such as "$P$ is a well-formed formula" or "variable $x$ does not occur in $P$", can be explained in terms of inference rules, but you won't see them in logic textbooks, because these prefer to explain syntax in an informal way, e.g., "If $P$ is a wff and $Q$ is a wff then $P \land Q$ is a wff." If we want to, we can write down the corresponding formal rule:
$$\frac{P\ \mathsf{prop} \qquad Q\ \mathsf{prop}}{(P \land Q)\ \mathsf{prop}}.$$
This says: "If $P$ is a well-formed formula and $Q$ is a well-formed formula, then $P \land Q$ is a well-formed formula." It is different from the rule
$$\frac{\Gamma \vdash P\ \mathsf{holds} \qquad \Gamma \vdash Q\ \mathsf{holds}}{\Gamma \vdash P \land Q\ \mathsf{holds}}$$
which says: "If $P$ is derived from hypotheses $\Gamma$ and $Q$ is derived from hypotheses $\Gamma$, then $P \land Q$ can also be derived from hypotheses $\Gamma$."
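The second rule can itself be read operationally. Here is a tiny sketch of a derivation builder (the representation is hypothetical, not taken from any proof assistant), where evidence for "$\Gamma \vdash P\ \mathsf{holds}$" can only be built by applying rules, and the $\land$-rule insists that both subderivations share the same hypothesis list:

```python
# A tiny derivation builder: a Derivation records a hypothesis list and
# a conclusion, and can only be produced by the rule functions below.

class Derivation:
    def __init__(self, hyps, concl):
        self.hyps, self.concl = hyps, concl

def assume(hyps, p):
    # hypothesis rule: Gamma |- P holds, provided P occurs in Gamma
    assert p in hyps
    return Derivation(hyps, p)

def and_intro(d1, d2):
    # from Gamma |- P holds and Gamma |- Q holds,
    # conclude Gamma |- (P /\ Q) holds
    assert d1.hyps == d2.hyps
    return Derivation(d1.hyps, ("and", d1.concl, d2.concl))

gamma = ["P", "Q"]
d = and_intro(assume(gamma, "P"), assume(gamma, "Q"))
assert d.concl == ("and", "P", "Q")
```

Note how the side condition `d1.hyps == d2.hyps` is exactly the requirement, visible in the rule, that both premises are derived from the same $\Gamma$.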
To see why it might make sense to have formal rules for establishing that something is a wff, consider the rule:
$$\frac{ }{\Gamma \vdash t = t\ \mathsf{holds}}$$
It does not say that $t$ must be a well-formed term. It would be more precise to say:
$$\frac{\Gamma\ \mathsf{hyps} \qquad t\ \mathsf{term}}{\Gamma \vdash t = t\ \mathsf{holds}}$$
Now the rule is explicit: reflexivity holds only under well-formed hypotheses, and only for well-formed terms. Of course, we need to give rules for $t\ \mathsf{term}$ and $\Gamma\ \mathsf{hyps}$. They would be something like
$$
\frac{ }{()\ \mathsf{hyps}}
\qquad\qquad
\frac{P\ \mathsf{prop} \qquad \Gamma\ \mathsf{hyps}}{(P, \Gamma)\ \mathsf{hyps}}
$$
and (assuming that the language has variables $x_k$, a constant $1$ and a binary operation $+$):
$$
\frac{k \in \mathbb{N}}{x_k\ \mathsf{term}}
\qquad\qquad
\frac{ }{1 \ \mathsf{term}}
\qquad\qquad
\frac{t\ \mathsf{term}\qquad u\ \mathsf{term}}{(t + u)\ \mathsf{term}}
$$
I am fudging details about meta-level natural numbers that are used for indexing variable names. You can devise your own solution (for instance use unary representation of numbers, and give inference rules for such a representation).
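For what it's worth, the formation rules above transcribe directly into a recursive checker. In this sketch I pick my own encodings (terms are tuples, variables are `("x", k)`, the constant is the string `"1"`), so the representation is an assumption, but each clause mirrors one inference rule:

```python
# Recursive checkers for the judgements "t term" and "Gamma hyps",
# with one clause per formation rule.

def is_term(t):
    # x_k term, for each natural number k
    if isinstance(t, tuple) and t[0] == "x":
        return isinstance(t[1], int) and t[1] >= 0
    # 1 term
    if t == "1":
        return True
    # (t + u) term, given t term and u term
    if isinstance(t, tuple) and t[0] == "+":
        return is_term(t[1]) and is_term(t[2])
    return False

def is_hyps(gamma, is_prop):
    # () hyps, and (P, Gamma) hyps when P prop and Gamma hyps
    return all(is_prop(p) for p in gamma)

assert is_term(("+", ("x", 3), "1"))      # (x_3 + 1) term
assert not is_term(("+", ("x", 3), "y"))  # "y" is not a term
```

Here I simply use Python's integers for the meta-level indices $k$, which is one way of resolving the fudged detail mentioned above.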
It is perhaps silly to follow such rigorous formal discipline for first-order logic, but there are formal systems where the discipline is necessary. Dependent type theory is an example, as well as formal systems describing programming languages. These can grow quite complex, and since they also get implemented in practice, it is quite helpful to have all details written down formally.
The first rule is not the regular disjunction elimination rule, but is known as disjunctive syllogism, and is essentially the modus tollendo ponens rule of term logic. The two rules are mutually admissible in reasonable formulations of classical logic, but the first rule is strictly weaker in intuitionistic logic.
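As a quick classical sanity check, one can confirm by truth tables that disjunctive syllogism (from $P \lor Q$ and $\lnot P$, infer $Q$) is valid; of course, a truth-table check only speaks to classical logic and says nothing about the intuitionistic situation:

```python
# Truth-table check of disjunctive syllogism: in every row where both
# premises (P or Q) and (not P) are true, the conclusion Q is true too.

from itertools import product

for p, q in product([False, True], repeat=2):
    if (p or q) and not p:
        assert q
```

The only row satisfying both premises is $P$ false, $Q$ true, and there the conclusion holds.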