To define a logic you need to specify a language of formulas, and then you need to provide either 1) a semantics, or 2) a proof system (i.e. a collection of rules of inference).
For commonly discussed logics, we usually have definitions both in terms of semantics and proof systems, and we have meta-theorems which connect specific pairs of semantics and proof systems. These are the soundness and completeness meta-theorems. Saying "classical propositional logic" means "the logic defined by any of these equivalent semantics or any of these equivalent proof systems" for a specific value of "these semantics/proof systems", e.g. truth table semantics. You can have nonequivalent semantics/proof systems, and this leads to different logics. You can also have different languages of formulas, and this too can lead to different logics.
Classical, intuitionistic, and minimal logic can be presented so that they use the same language. In fact, they can be presented (in multiple ways) so that the rules of inference given for minimal logic are a strict subset of those given for intuitionistic logic which are in turn a strict subset of those given for classical logic. Modal logic and fuzzy logic have a different language than classical logic.
To talk about a (formal) proof, you need a proof system. What constitutes a proof depends on the proof system and there can be many for a given logic. For a specific logic, these will be necessarily equivalent since they define the same logic. For our purposes, "equivalent" here means that the same collection of formulas have proofs. Those proofs will look different between different proof systems, but the fact that some proof exists will be the same between them.
Hilbert-style, sequent calculi, and natural deduction systems are three "styles" for organizing the rules of a proof system. This Wikipedia page lists dozens of distinct Hilbert-style proof systems. All the proof systems in section 1 are equivalent, but they aren't equivalent to the proof systems in section 3, say. The Wikipedia page for the sequent calculus primarily focuses on LK but presents multiple variations of it, some equivalent and some, like LJ, nonequivalent. Similarly for the natural deduction page. As illustrated on that page (and as is more generally true), all of these "styles" are applicable to different languages, e.g. the language used by modal logic.
Hilbert-style systems try to reduce the number of rules of inference, replacing them with logical axioms. This makes the meta-theory of Hilbert-style systems much simpler but tends to make using them very unpleasant. Sequent calculi and natural deduction systems go in the other direction and minimize or even eliminate logical axioms in favor of rules of inference. This allows each connective to be characterized by rules which involve only that connective, making these proof systems modular: you can add or remove connectives just by adding or removing the relevant rules. These proof systems also make it easier to prove generic "structural" properties of connectives, rules of inference, and the logic itself, whereas in Hilbert-style systems everything tends to serve multiple purposes and be interdependent. The rules of natural deduction systems focus on the formula you're proving, which tends to make building proofs fairly natural. In contrast, sequent calculi also focus on assumptions, which tends to be less intuitive. On the other hand, building a proof in a natural deduction system has a kind of outside-in feel, while in a sequent calculus you build proofs in a bottom-up fashion. This makes sequent calculi easier to prove things about.
Via the Curry-Howard correspondence, each of these styles leads to a different way of relating to type theory/programming. Hilbert-style systems correspond to combinatory logic; natural deduction systems correspond to typed lambda calculi with constructors and destructors. The sequent calculus case is a bit more subtle and technical.
- (red) The rule makes sense to me for ND but not for SC. In SC it says "if $\Gamma,\varphi$ proves $\Delta$ then $\Gamma$ proves $\neg\varphi,\Delta$".
So I guess the comma on the right of $\vdash$ must be read as an OR.
(And comma on the left means AND?)
Yes, that's correct. From the post linked in the comments:
The naive interpretation of a sequent $A_1, \ldots, A_n \vdash B_1, \ldots, B_m$ is that the conjunction of the $A$'s implies the disjunction of the $B$'s:
$\vdash A_1 \land \ldots \land A_n \rightarrow B_1 \lor \ldots \lor B_m$
Using the fact that $A \to B$ is equivalent to $\neg A \lor B$, we can re-write this as:
$\vdash \neg(A_1 \land \ldots \land A_n) \lor B_1 \lor \ldots \lor B_m$
And this is equivalent to
$\vdash \neg A_1 \lor \ldots \lor \neg A_n \lor B_1 \lor \ldots \lor B_m$
All these derivability claims are equivalent in the strong sense that a derivation of any one of them can be transformed into a derivation of any of the others; that is, we have the so-called deduction theorem
$$A_1, \ldots, A_n \vdash B_1, \ldots, B_m \iff \vdash \neg A_1 \lor \ldots \lor \neg A_n \lor B_1 \lor \ldots \lor B_m$$
So a sequent can be thought of as a large disjunction, where the premise formulas are negated and the conclusion formulas are positive.
If a formula occurs on the left-hand side of the sequent, it can be thought of as negated (in the disjunction), and if it occurs on the right-hand side of the sequent, it can be thought of as positive (in the disjunction).
So by switching sides, you effectively negate and un-negate the formula $\varphi$: moving $\varphi$ from the (negative) LHS of the sequent to the (positive) RHS gives you $\neg \varphi$. Together with the fact that an empty RHS corresponds to an empty disjunction, and the observation that an empty disjunction behaves like a contradiction ($\bot$), this motivates the rule in question.
- (orange) Aff stands for affaiblissement = weakening. So if the R.H.S. comma is an OR then I guess there is no problem: "if $\Gamma$ proves $\Delta$ then $\Gamma$ proves $\varphi$ or $\Delta$"
Exactly.
- (yellow) I realize now that this is also OK: since $\varphi$ or $\Delta$ is true and $\neg\varphi$ is in the hypotheses, $\Delta$ must be true
Yes, see above.
- (blue) In general, SC rules often seem to be the same as ND rules but with $,\Delta$ on the right. Why is that?
That's because sequent calculus, unlike ND, allows for more than one conclusion formula. In general, we can have arbitrarily many formulas $B$ on the right-hand side of our sequent, while rules only manipulate one. To account for the fact that before and after the rule application there might still be other formulas on the RHS of the sequent, we summarize these formulas by $\Delta$. Note that $\Gamma$ and $\Delta$ can be empty.
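As a concrete illustration, compare $\lor$-introduction in ND with its SC counterpart (rule names vary between presentations): the rules are the same except that the sequent rule carries the extra conclusions $\Delta$ along.

$$\frac{\Gamma\vdash\varphi}{\Gamma\vdash\varphi\lor\psi}\;(\lor I_1)\qquad\qquad\frac{\Gamma\vdash\varphi,\Delta}{\Gamma\vdash\varphi\lor\psi,\Delta}\;(\lor R_1)$$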
- SC $\textit{Aff}_g$: I assume that the L.H.S. comma in SC means AND, so why from $\Gamma\vdash\Delta$ can we deduce $(\Gamma$ and $\varphi)\vdash\Delta$? I guess if we know that $\Gamma$ by itself proves $\Delta$, then also knowing $\varphi$ doesn't hurt. It's just weird, because I know that in ND, $\Gamma,\varphi=\Gamma\cup\{\varphi\}$ (and a L.H.S. comma is also AND). This makes sense but is weird because I'm used to treating the union of objects as OR (from probability courses)...
Your intuition is right. This property is called monotonicity: If from a set of premises $\Gamma$ we can infer $\Delta$, then adding more knowledge to the premises doesn't destroy that previous knowledge. If from "If I drop my pencil, it will hit the ground" and "I dropped my pencil" I can infer "My pencil hit the ground", then I shouldn't lose that inference just because I additionally know that "Unicorns like asparagus", and neither if I know that "Unicorns don't like asparagus". The apparent contradiction with unions usually being read as disjunctive is hopefully resolved by the fact that the formulas on the (negative) LHS of the sequent can be read as a disjunction of negations.
What are the roles of SC and ND in minimal, intuitionistic and classical logic? As I understand it, min., int., cl. logics use ND. So what's the point of SC? And why do we need both ND and SC?
These are quite broad questions that can't be comprehensively answered within an SE post, so let me just say this much:
Minimal and intuitionistic logic certainly do have sequent calculi; it's just a matter of modifying the permitted sequents and rules: sequent calculi for intuitionistic and minimal logic can be obtained by simply restricting oneself to sequents with at most one and, respectively, exactly one formula on the right-hand side, and modifying the rules accordingly.
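To see what that restriction rules out, consider the standard LK derivation of the law of excluded middle: it essentially needs two formulas on the right-hand side at once, which the single-conclusion calculus LJ forbids.

$$\dfrac{\dfrac{\dfrac{\dfrac{\varphi\vdash\varphi}{\vdash\varphi,\neg\varphi}\;(\neg R)}{\vdash\varphi,\varphi\lor\neg\varphi}\;(\lor R)}{\vdash\varphi\lor\neg\varphi,\varphi\lor\neg\varphi}\;(\lor R)}{\vdash\varphi\lor\neg\varphi}\;(\text{contraction }R)$$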
And while one doesn't "need" more than one syntactic calculus in the sense of guaranteeing the existence of a derivation for any semantic tautology (given that ND and SC for classical logic are equivalent in this respect by completeness), different calculi have different proof-theoretic properties: SC has some interesting features in the way derivations are built up and the things one can "see" in a proof that ND lacks, and vice versa. ND more closely resembles the way mathematicians argue naturally (hence the name); SC is nice because assumptions are kept locally inside a sequent rather than scattered over the leaves of a derivation tree.
A discussion of both can be found in the book linked by Mauro Allegranza in the comments.
The Wikipedia article on sequent calculus also gives a good overview.
Best Answer
Probably your last comment about double negation elimination breaking the symmetry of natural deduction is the 'reason' for the cited quote, though I don't agree with the sentiment.
Basically, natural deduction has introduction and elimination rules for each one of "$\land$" and "$\lor$" and "$\to$", which correspond to their intended meanings. We also have negation introduction in the form "$A \to \bot \vdash \neg A$" and negation elimination in the form "$A , \neg A \vdash \bot$". So double negation elimination (DNE) is an 'odd one out', so to speak, being "$\neg \neg A \vdash A$" or equivalently "$( A \to \bot ) \to \bot \vdash A$". I'm excluding the principle of explosion here because it is redundant in the presence of DNE.
Also, note that you can prove each instance of LEM using DNE plus the other rules, so adding DNE suffices to recover classical logic. If you added LEM instead of DNE, you would also have to add the principle of explosion, namely "$\bot \vdash A$" for any proposition $A$. In that sense DNE can be said to be slightly stronger than LEM. Intuitively this is to be expected, because we need at least one rule that reduces the total number of occurrences of "$\neg$" or "$\bot$", and LEM does not do so. Both the principle of explosion and DNE reduce that number, by $1$ and $2$ respectively. But adding just the principle of explosion is not enough to get classical logic, because it only results in intuitionistic logic.
For your other points, your statement that natural deduction is less suitable than sequent calculus for automated proof systems is arguably false. Even though the LK calculus works without the cut rule, it is not useful in practice to take that rule away, because cut shortens proofs drastically, in the same way that allowing on-the-fly definitions makes modern mathematics possible (we can't get far if we're not allowed to define arbitrary new terminology).
And as for the cut rule being implicit in ND via implication introduction and elimination: well, is that really implicit? We could modify the other rules appropriately so that none of them uses the implication symbol, and then those two rules would indeed become redundant in the same manner as the cut rule. So it looks quite explicit to me. As with LK, it is not useful in practice to take these rules away.