The definition of semantic completeness is that if $\phi$ is a semantic consequence of a set of propositions $T$, then there is a natural deduction of $\phi$ from $T$. Syntactic completeness, on the other hand, states that every proposition or its negation is a theorem. What is the reason for these names? Intuitively it would seem that the second notion should be called syntactic completeness, since the syntactic consequences "complete" the set of tautologies. Or one could think of the theorems as being closed under the relation formed by natural deduction.
Reasons for calling completeness semantic and syntactic
definition, logic, propositional-calculus
Related Solutions
It is a very interesting question and I'll try to answer it.
Prelude.
In what follows I am going to talk about logics in a technical sense: a logic is specified by a language (i.e. a set of formulas, presented in some way), a set of inference rules (i.e. possibly partial operations from formulas to formulas), and a satisfiability relation (defined between elements called interpretations and formulas).
The language and the inference rules provide the proof system of the logic while the interpretations and the satisfiability relation provide the semantics of the logic.
From these data we get two different notions of consequence in a logic:
- the first one is $T \models \varphi$, usually called logical consequence, which states that every model of $T$ (i.e. every interpretation that satisfies the formulas in $T$) is also a model of $\varphi$
- the second one is $T \vdash \varphi$, usually called derivability, which states that there is a proof (in the proof system considered) of $\varphi$ that uses only formulas in $T$ as assumptions.
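The first notion, $T \models \varphi$, is decidable by brute force in propositional logic, which makes it easy to sketch. Below is a minimal illustration assuming formulas are represented as Python functions from a valuation (a dict of atom names to booleans) to a boolean; the names `entails`, `T`, and `phi` are just illustrative choices, not any standard API.

```python
from itertools import product

def entails(T, phi, atoms):
    """T |= phi: every valuation satisfying all of T also satisfies phi."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(psi(v) for psi in T) and not phi(v):
            return False  # found a model of T that is not a model of phi
    return True

# Example: {p, p -> q} |= q
T = [lambda v: v["p"], lambda v: (not v["p"]) or v["q"]]
phi = lambda v: v["q"]
print(entails(T, phi, ["p", "q"]))  # True
```

Derivability, $T \vdash \varphi$, has no such generic one-liner: it depends on the particular proof system, since it asks for a concrete derivation built from the inference rules.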
In every good logical system we have that if $T \vdash \varphi$ holds, i.e. if we have a proof of $\varphi$ from $T$, then $T \models \varphi$ holds as well.
In very good logical systems the converse holds too: if $T \models \varphi$, then $T \vdash \varphi$.
In the first case we say that the logic is sound and in the second case that it is complete.
Classical first-order logic is the best-known example of a sound and complete system. Classical second-order logic, with Tarski semantics, is an example of a sound but incomplete system.
Let's answer the question. Generally, to prove a given formula $\varphi$ assuming a set of axioms $T$, one simply provides a proof of $\varphi$ using assumptions from $T$; that is, one has to prove $T \vdash \varphi$. This is what one means by a syntactic proof of $\varphi$.
But if you are working in a sound and complete logic you have another way to prove a formula $\varphi$: you can prove $T \models \varphi$, and then use the completeness of the system to conclude $T \vdash \varphi$. Methods of this sort are called semantic because they do not provide a proof of $\varphi$ directly; they provide one indirectly, via completeness, by proving properties of the interpretations of $T$ and $\varphi$.
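To make the two routes concrete, consider showing $\{p,\; p \to q\} \vdash q$. The syntactic route is a single natural-deduction step (a sketch in standard inference-rule notation):

```latex
% Syntactic route: one application of modus ponens (implication elimination)
\frac{p \qquad p \to q}{q}\;(\to\!E)
```

The semantic route would instead check that every valuation making both $p$ and $p \to q$ true also makes $q$ true, and then invoke completeness to conclude that a derivation exists, without exhibiting it.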
So, to answer question 1:
(1) does the distinction hold in mathematics?
Absolutely. The syntactic and semantic methods are different: one works by providing a proof of the statement directly, the other provides the proof indirectly via a completeness result. Also, semantic methods have limited application in incomplete systems: in such systems you cannot turn logical consequence into derivability, so you do not automatically get a proof.
About question 2:
(2) if it holds, is it possible to classify the mathematical induction method as purely semantic or purely syntactic.
Mathematical induction (I am assuming you are referring to arithmetic induction, to be exact) falls into the realm of syntactic methods: it basically involves applying inference rules to the induction axiom scheme (in first-order logic; second-order logic uses a single induction axiom instead). When using induction you do not test the formula on models, so it is clearly not a semantic method.
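For reference, the first-order scheme mentioned above contributes one axiom for each formula $\varphi(x)$ of the language:

```latex
% First-order induction axiom scheme: one instance per formula \varphi(x)
\bigl[\varphi(0) \land \forall n\,\bigl(\varphi(n) \to \varphi(n+1)\bigr)\bigr]
  \to \forall n\,\varphi(n)
```

Using induction in a proof means instantiating this scheme at a particular $\varphi$ and then applying the ordinary inference rules, which is why it counts as a syntactic method.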
I hope this answers your doubts.
Best Answer
The distinction is rooted in the syntax vs. semantics distinction of natural language:
In the same way, in formal languages:
Syntactic completeness is defined in terms of the "form" of formulas, $\phi$ and $\lnot \phi$, i.e. in terms of grammar, while semantic completeness is defined in terms of interpretations, i.e. the meaning of the formulas.
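The two properties really can come apart: classical propositional calculus is semantically complete but not syntactically complete, since for an atom $p$ neither $p$ nor $\lnot p$ is a theorem. By soundness it suffices to check that neither is a tautology; a small sketch (the helper `is_tautology` is an illustrative name, with formulas as functions on valuations):

```python
from itertools import product

def is_tautology(phi, atoms):
    """phi holds under every valuation of its atoms."""
    return all(phi(dict(zip(atoms, vals)))
               for vals in product([False, True], repeat=len(atoms)))

p = lambda v: v["p"]
not_p = lambda v: not v["p"]
print(is_tautology(p, ["p"]))      # False: p fails when p is False
print(is_tautology(not_p, ["p"]))  # False: not-p fails when p is True
print(is_tautology(lambda v: v["p"] or not v["p"], ["p"]))  # True: excluded middle
```

So neither formula is derivable, yet the system is still semantically complete: everything that is a tautology is a theorem.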