[Math] Any reasons why the basis case can’t be at the end of a mathematical induction proof

When doing a proof by mathematical induction, is there any logical reason why the assumption ($n=k$) and induction ($n=k+1$) steps couldn't be done first, with the basis case ($n=1$) done afterwards, rather than the traditional order of doing the basis case first? I am teaching this to my students, and I think it would make more sense to them to first show that P(k+1) is true if P(k) is true, and then show that P(1) is true, so that P(2) must be true, P(3) must be true, and so on, and P(n) is true for all n. I can't see any flaw in the logic. Please advise.

Tags: induction, proof-writing
Related Solutions
You actually want to show
$$\frac{1}{n+2}+\frac{1}{n+3}+\cdots+\frac{1}{(n+1)+(n+1)}+\frac{1}{(n+1)+(n+2)}\le\frac{5}{6}$$
So you take the inductive hypothesis, subtract $\frac{1}{n+1}$ from the left-hand side, and add $\frac{1}{(n+1)+(n+1)}+\frac{1}{(n+1)+(n+2)}$. Since you can show that this net change is negative, the overall sum is still less than or equal to $\frac56$.
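The algebra above can be sanity-checked numerically. This is a quick sketch, assuming the sum in question is $\sum_{k=n+1}^{2n+1}\frac1k$ (which matches the terms displayed above); exact rationals avoid floating-point noise:

```python
from fractions import Fraction

def s(n):
    # sum of 1/k for k = n+1 .. 2n+1
    return sum(Fraction(1, k) for k in range(n + 1, 2 * n + 2))

for n in range(1, 50):
    change = s(n + 1) - s(n)
    # the step from n to n+1 removes 1/(n+1) and adds 1/(2n+2) + 1/(2n+3)
    assert change == Fraction(1, 2 * n + 2) + Fraction(1, 2 * n + 3) - Fraction(1, n + 1)
    assert change < 0                # the net change is negative...
    assert s(n) <= Fraction(5, 6)    # ...so the sum stays at or below 5/6
```

Note that $s(1)=\frac12+\frac13=\frac56$ exactly, which is why the inequality is $\le$ rather than $<$.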
It is a very interesting question and I'll try to answer it.
Prelude.
In what follows I am going to talk about logics in a technical sense, i.e. a logic will be specified by a language (i.e. a set of formulas, presented in some way), a set of inference rules (i.e. possibly partial operations from formulas to formulas) and a satisfiability relation (defined between elements called interpretations and formulas).
The language and the inference rules provide the proof system of the logic while the interpretations and the satisfiability relation provide the semantics of the logic.
From these data we get two different notions of consequence in a logic:
- the first one is $T \models \varphi$, usually called logical consequence, which basically states that every model of $T$ (i.e. every interpretation that satisfies the formulas in $T$) is also a model of $\varphi$
- the second one is $T \vdash \varphi$, usually called derivability, which states that there is a proof (in the proof system under consideration) of $\varphi$ that uses only formulas in $T$ as assumptions.
In every good logical system, if $T \vdash \varphi$ holds, i.e. if we have a proof of $\varphi$ from $T$, then $T \models \varphi$ holds as well.
In very good logical systems the converse holds too: if $T \models \varphi$ then $T \vdash \varphi$.
In the first case we say that the logic is sound and in the second case that it is complete.
Classical first-order logic is the best-known example of a sound and complete system. Classical second-order logic, with Tarski semantics, is an example of a sound but incomplete system.
Let's answer the question. Generally, to prove a given formula $\varphi$ assuming a set of axioms $T$, one simply has to provide a proof of $\varphi$ using assumptions in $T$, that is, one has to prove $T \vdash \varphi$. This is what one means by a syntactic proof of $\varphi$.
But if you are working in a sound and complete logic you have another way to prove a formula $\varphi$: you can prove $T \models \varphi$, and then, using the completeness of the system, conclude that $T \vdash \varphi$. Methods of this sort are called semantic because they do not provide a proof of $\varphi$ directly: they provide one indirectly via completeness, by proving properties of the interpretations of $T$ and $\varphi$.
So to answer 1
(1) does the distinction hold in mathematics?
Absolutely. The syntactic and semantic methods are different: one works by providing a proof of the statement directly, while the other provides a proof indirectly via a completeness result. Also, semantic methods have limited application in incomplete systems: in these systems you cannot turn logical consequence into derivability, so you may not get a proof automatically.
About question 2
(2) if it holds, is it possible to classify the mathematical induction method as purely semantic or purely syntactic.
Mathematical induction (I am assuming that you are referring to arithmetic induction, to be exact) falls into the realm of syntactic methods: it basically involves applying inference rules to the induction axiom scheme (in first-order logic; second-order logic uses a single induction axiom instead). When using induction you do not test the formula on models, so it clearly is not a semantic method.
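For reference, the first-order induction scheme is the collection of axioms, one for each formula $\varphi(n)$ of the language:

$$\bigl(\varphi(0)\wedge\forall n\,(\varphi(n)\rightarrow\varphi(n+1))\bigr)\rightarrow\forall n\,\varphi(n)$$

A proof by induction is then an ordinary syntactic derivation that uses one of these axioms together with modus ponens.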
I hope this answers your doubts.
Best Answer
Formally, there is no requirement to prove P(1) first. Practically, though, verifying P(1) first can save a lot of aggravation in cases where the inductive step works but the base step turns out not to hold. Try to prove that $2n+1$ is even for all $n \in \mathbb{N}$ by induction, for example: the inductive step certainly works, but the proposition is false, since the base case $2 \cdot 1 + 1$ turns out to be odd - and you'd only realize that at the very end if you were to check P(1) last.
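That failure mode can be seen concretely in a few lines. This is a throwaway sketch, not part of any proof; `is_even` and `step_preserves_parity` are just illustrative helper names:

```python
def is_even(m):
    return m % 2 == 0

def step_preserves_parity(k):
    # The inductive step "works": if 2k+1 were even, then
    # 2(k+1)+1 = (2k+1) + 2 would be even too, since adding 2 preserves parity.
    return (2 * k + 1) % 2 == (2 * (k + 1) + 1) % 2

# The step holds for every k we try...
assert all(step_preserves_parity(k) for k in range(1, 1000))

# ...but the base case P(1) fails: 2*1 + 1 = 3 is odd.
assert not is_even(2 * 1 + 1)
```

The inductive step is vacuously fine (it only says "even implies even"); without a true base case there is nothing to propagate.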
(And then, there are those funny cases like All horses are the same color where the fallacy falls squarely in between the base step and the proper induction step.)