[Math] Premise vs. Assumption

logic, proof-writing

I have just asked about the difference between A, B and A ∧ B in A, B ⊢ M. However, I have realized that there is another, intimately related question: what is the difference between the implication ⊢ A → B and A ⊢ B? The condition of an implication just constrains the valid area, whatever kind of implication it is. When I say that, provided A, I can show B, should I denote this implication by ⇒ (logical implication), by the logical connective →, by ⊢, or maybe by ⊨? I see no difference, and looking at the answers to another question about the difference between the different kinds of implication has not helped me understand it.

I can ask from another side. Proofs are written in a column, and the premises come first. That is, A ⊢ B is written as

A (premise)
maybe some more formulas
B
A ⊢ B

Alternatively, you open a box when you make the assumption A, prove B under the assumption and, finally, close the assumption box with A → B proven:

$$\begin{bmatrix} A\ (\text{assumption}) \\
\text{maybe some more formulas}\\
B \end{bmatrix}$$
$$A \rightarrow B$$
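To make the parallel concrete, here is how the two styles look in a proof assistant. This is a minimal Lean 4 sketch of my point (the choice of Lean, the concrete sequent A ∧ B ⊢ B ∧ A, and the names are illustrative only; I picked a provable sequent so the code type-checks):

```lean
variable (A B : Prop)

-- Premise style: A ∧ B sits to the left of the turnstile,
-- i.e. it is a hypothesis in the context.
example (h : A ∧ B) : B ∧ A := ⟨h.2, h.1⟩

-- Assumption style: `intro` opens the assumption box;
-- closing it leaves the implication A ∧ B → B ∧ A proven.
example : A ∧ B → B ∧ A := by
  intro h           -- assume A ∧ B (open the box)
  exact ⟨h.2, h.1⟩  -- prove B ∧ A under the assumption
```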

I do not see any difference between the two proofs at all. A is always a condition which, when it holds, reduces a formula of two variables, A and B, to a formula of the single variable B. Instead of A, B, C ⊢ X, why not simply write A ∧ B ∧ C → X, which equals A → (B → (C → X))? This means that under the conditions A, B and C we are left with the formula X; under the condition A alone, we are left with B → (C → X).
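The currying identity itself can be checked mechanically; a minimal Lean 4 sketch, with illustrative variable names:

```lean
variable (A B C X : Prop)

-- The uncurried and curried forms are interderivable.
example : (A ∧ B ∧ C → X) ↔ (A → B → C → X) := by
  constructor
  · intro h a b c
    exact h ⟨a, b, c⟩
  · intro h habc
    exact h habc.1 habc.2.1 habc.2.2
```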

The proofs are absolutely identical. I do not understand why you call the condition a premise in one case and an assumption in the other. Merely pointing to the language/metalanguage distinction does not make much sense to me: I see no difference if I switch a condition between assumption and premise.

Update: I like how Git Gud has aptly noticed that an assumption is technically treated identically to a premise in a proof. Actually, he said that they are treated differently and that you do not draw the box around the premises-prove-conclusion derivation. But obviously that is like saying that white is black. I obviously treat them identically, and I do not draw a frame around the premises-prove-conclusion derivation because it is the top-level box, and the frame is omitted just as I omit the top-level parentheses around an expression: I could write (3*(a+b)), but I simply write 3*(a+b). The fact that I have left the parentheses off does not mean that these are different expressions. Why do you see them as different when it comes to sequents?

Best Answer

Think of well-formed formulae as being objects "inside the system", and "⊢" and "," as being "outside the system", with the meaning that from the objects (inside the system) on the left we can "derive" the objects on the right. So "$\Rightarrow$" and "$\wedge$", or whatever else the system allows in well-formed formulae, are different simply because they are inside. The rules outside the system govern how you can derive new objects in the system from previously derived or given objects in the system. For example, one typical rule is "$A, B ⊢ A \wedge B$ for any well-formed formulae $A$ and $B$", which means that given two objects $A$ and $B$ we can get a third object, $A \wedge B$.
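As a concrete illustration, here is a minimal sketch in Lean 4 (one possible formal system; the identifiers are illustrative): the comma-separated objects to the left of "⊢" show up as hypotheses in the context, while "∧" stays inside the formula being built.

```lean
variable (A B : Prop)

-- The rule "A , B ⊢ A ∧ B": the objects on the left of ⊢ become
-- hypotheses `a` and `b`; the connective ∧ lives inside the formula.
example (a : A) (b : B) : A ∧ B := And.intro a b
```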

This is also why most formal systems need two special rules: one to instantiate a derived formula in the current context/scope, and another to eliminate a contradiction. Neither should actually be written using "⊢", otherwise the rule is circularly defined. I have seen the second written as "$( A ⊢ ( B \wedge \neg B) ) ⊢ \neg A$", but in my opinion that is very bad practice, since a consistent interpretation of the syntax would then require "⊢" to be inside the system, which would be self-referential. Furthermore, it obscures the existence of the context/scope, which is not good. Likewise, I have seen the first rule omitted from discussion entirely! One way to get rid of the context/scope is to use a sequent-calculus style of presentation (see http://en.wikipedia.org/wiki/Sequent_calculus), which is suitable for automated provers but not really for humans.
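One way to see how the circularity can be avoided: in Lean 4, ¬A is by definition A → False, so the rule above becomes an ordinary derivation inside the system rather than a rule that mentions "⊢". A minimal sketch:

```lean
variable (A B : Prop)

-- Since ¬A unfolds to A → False, the problematic rule
-- (A ⊢ B ∧ ¬B) ⊢ ¬A is just a function: from `h : A → B ∧ ¬B`,
-- build a map A → False by applying ¬B to B.
example (h : A → B ∧ ¬B) : ¬A :=
  fun a => (h a).2 (h a).1
```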

Now, for your question of why there should be a difference at all, there are two possible reasons that I know people may give. One is that the "language" used outside the system to describe its rules can be reused to describe different structures that may not be logic systems, or may not behave in the same way. But beyond that, I do not see why there cannot be a single self-referential "language" that is sufficiently flexible to describe everything possible. After all, we managed to use a natural language like English to successfully convey what all our mathematical symbols mean...

The second reason is to limit the system so that it cannot "talk about itself", which lets us prove certain meta-theorems about the system. For example, we can prove a theorem that logical equivalence of statements in boolean algebra (only logical connectives, no quantifiers) is decidable. Note that this theorem lives outside the system and even requires induction, which means that we would need meta-integers and a meta-axiom of meta-induction! But what are the rules of the meta-theory in which these meta-theorems are proven? That would require a meta-meta-theory... and so on to infinity. Wait, meta$^\infty$-infinity. So I personally do not see any fundamental reason to accept this reason either.
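As a small illustration of that decidability claim (a minimal Lean 4 sketch; Lean's `decide` tactic performs exactly this kind of finite, truth-table-style check):

```lean
-- Equivalence of quantifier-free boolean formulas is decidable:
-- `decide` exhaustively checks all valuations of p and q.
example : ∀ p q : Bool, (p && q) = (q && p) := by decide
```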