[Math] How can a proof by formula induction in a formal language be formalized


From a set of not-so-rigorous lecture notes on Metalogic:

Formulas of $L$:

  • (i) Each sentence letter is a formula.
  • (ii) If $A$ is a formula, then so is $\neg A$.
  • (iii) If $A$ and $B$ are formulas, then so is ($A$ $\wedge$ $B$).
  • (iv) Nothing is a formula unless its being one follows from (i)–(iii).

A proof by formula induction of some property $P$ of this formal language shows that $P$ holds in case (i) and is preserved by the constructions in (ii) and (iii); by (iv), $P$ therefore holds for all formulas.

For example, a proof of the property $P$ that every formula $A$ contains the same number of left parentheses as right parentheses involves three steps:

  1. A sentence letter $s$ has no parentheses, so $P$ holds vacuously: $0_{\text{left}} = 0_{\text{right}}$.
  2. Assume $P$ holds for $A$, then as $\neg A$ contains the same number of parentheses as $A$, $P$ holds for $\neg A$.
  3. Assume that $P$ holds for $A$ and $B$, then for $(A \wedge B)$ the number of parentheses on the left is $A_{left}+B_{left}+1$ and on the right $A_{right}+B_{right}+1$, and as $A_{left}=A_{right}$, $B_{left}=B_{right}$, and $1=1$, $P$ holds.

The property $P$ of formulas of $L$ is proven by formula induction.
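The three induction steps above can be sketched in Python, assuming a hypothetical encoding of $L$-formulas as nested tuples: a sentence letter is a string, $\neg A$ is `("not", A)`, and $(A \wedge B)$ is `("and", A, B)`.

```python
def paren_counts(f):
    """Return (left, right) parenthesis counts, mirroring the induction steps."""
    if isinstance(f, str):           # step 1: a sentence letter has no parentheses
        return (0, 0)
    if f[0] == "not":                # step 2: ¬A adds no parentheses
        return paren_counts(f[1])
    # step 3: (A ∧ B) adds one parenthesis on each side
    al, ar = paren_counts(f[1])
    bl, br = paren_counts(f[2])
    return (al + bl + 1, ar + br + 1)

f = ("and", ("not", ("and", "p", "q")), "r")   # (¬(p ∧ q) ∧ r)
left, right = paren_counts(f)
assert left == right == 2
```

The recursion terminates because every recursive call is on a proper subformula, which is exactly what clause (iv) guarantees.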

My question is, how can such a property and its proof be formalized in a logical system / calculus, using formal inference rules and axioms?

Best Answer

You can consult a standard textbook; for example:

Definition 2.1.1 The language of propositional logic has an alphabet consisting of

(i) proposition symbols: $p_0,p_1,p_2,\ldots$,

(ii) connectives: $∧,∨,→,¬,↔,⊥$,

(iii) auxiliary symbols: ( , ).

Definition 2.1.2 The set $PROP$ of propositions is the smallest set $X$ with the properties

(i) $p_i \in X (i \in \mathbb N), ⊥ \in X$,

(ii) if $\varphi, \psi \in X$, then $(\varphi \square \psi) \in X$, where $\square$ is one of the connectives $∧,∨,→,↔$.

(iii) if $\varphi \in X$, then $(¬ \varphi) \in X$.

Properties of propositions are established by an inductive procedure analogous to Definition 2.1.2: first deal with the atoms, and then go from the parts to the composite propositions. This is made precise in the following theorem.

Theorem 2.1.3 (Induction Principle) Let $A$ be a property. Then $A(\varphi)$ holds for all $\varphi \in PROP$ if

(i) $A(p_i)$, for all $i$, and $A(⊥)$,

(ii) if $A(\varphi),A(\psi)$, then $A((\varphi \square \psi))$,

(iii) if $A(\varphi)$, then $A((¬ \varphi))$.

We call an application of Theorem 2.1.3 a proof by induction on $\varphi$.


Example. Each proposition has an even number of brackets. Proof:

(i) Each atom has $0$ brackets and $0$ is even.

(ii) Suppose $\varphi$ and $\psi$ have $2n$, resp. $2m$ brackets, then $(\varphi \square \psi)$ has $2(n + m + 1)$ brackets.

(iii) Suppose $\varphi$ has $2n$ brackets, then $(¬ \varphi)$ has $2(n +1)$ brackets.


In arithmetic one often defines functions by recursion [...]. The justification is rather immediate: each value is obtained by using the preceding values (for positive arguments). There is an analogous principle in our syntax.

Example 1. The number $b(\varphi)$ of brackets of $\varphi$, can be defined as follows:

$b(\varphi) =0$ for $\varphi$ atomic,

$b((\varphi \square \psi)) = b(\varphi)+b(\psi)+2$,

$b((¬ \varphi)) = b(\varphi)+2$.

The value of $b(\varphi)$ can be computed by successively computing $b(\psi)$ for its subformulas $\psi$.
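Example 1 translates directly into a recursive Python function, assuming a hypothetical tuple encoding of propositions (an atom is a string, $(\neg\varphi)$ is `("not", phi)`, and $(\varphi \square \psi)$ is `(op, phi, psi)`):

```python
def b(phi):
    """Number of brackets of phi, following the recursion in Example 1."""
    if isinstance(phi, str):               # b(φ) = 0 for atomic φ
        return 0
    if phi[0] == "not":                    # b((¬φ)) = b(φ) + 2
        return b(phi[1]) + 2
    return b(phi[1]) + b(phi[2]) + 2       # b((φ □ ψ)) = b(φ) + b(ψ) + 2

phi = ("and", "p", ("not", "q"))           # (p ∧ (¬q))
assert b(phi) == 4                         # two pairs of brackets
```

Each clause of the definition becomes one branch of the function, and the value for a compound is computed from the values for its parts, exactly as the text describes.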

We can give this kind of definition for all sets that are defined by induction. The principle of “definition by recursion” takes the form of “there is a unique function such that ...”. The reader should keep in mind that the basic idea is that one can “compute” the function value for a composition in a prescribed way from the function values of the composing parts.

The general principle behind this practice is laid down in the following theorem.

Theorem 2.1.6 (Definition by Recursion) Let mappings $H_{\square} : A^2 \to A$ and $H_¬ : A \to A$ be given and let $H_{at}$ be a mapping from the set of atoms into $A$, then there exists exactly one mapping $F : PROP \to A$ such that

$F(\varphi) = H_{at}(\varphi)$ for $\varphi$ atomic,

$F((\varphi \square \psi)) = H_{\square}(F(\varphi),F(\psi))$,

$F((¬ \varphi)) = H_¬(F(\varphi))$.
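Theorem 2.1.6 can itself be read as a higher-order function: given $H_{at}$, $H_\square$ and $H_\neg$, it produces the unique $F$. A minimal sketch, assuming the hypothetical tuple encoding of propositions (atoms are strings, `("not", phi)` for negation, `(op, phi, psi)` for a binary connective):

```python
def make_F(H_at, H_box, H_neg):
    """Return the unique F with F(atom) = H_at(atom),
    F((φ □ ψ)) = H_box(F(φ), F(ψ)), and F((¬φ)) = H_neg(F(φ))."""
    def F(phi):
        if isinstance(phi, str):
            return H_at(phi)
        if phi[0] == "not":
            return H_neg(F(phi[1]))
        return H_box(F(phi[1]), F(phi[2]))
    return F

# Example 1's bracket count b as an instance of the theorem:
b = make_F(lambda atom: 0, lambda x, y: x + y + 2, lambda x: x + 2)
assert b(("and", "p", ("not", "q"))) == 4
```

Existence of $F$ corresponds to the fact that the recursion terminates on every formula; uniqueness corresponds to the fact that the three clauses leave no choice at any step.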


Example 2. The rank $r(\varphi)$ of a proposition $\varphi$ is defined by

$r(\varphi) = 0$ for atomic $\varphi$,

$r((\varphi \square \psi)) = \max(r(\varphi), r(\psi)) + 1$,

$r((¬ \varphi)) = r(\varphi)+1$.
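The rank is another instance of definition by recursion; a sketch under the same assumption that propositions are encoded as nested tuples (atoms as strings, `("not", phi)` and `(op, phi, psi)` otherwise):

```python
def r(phi):
    """Rank r(φ): the nesting depth of connectives (Example 2)."""
    if isinstance(phi, str):                 # r(φ) = 0 for atomic φ
        return 0
    if phi[0] == "not":                      # r((¬φ)) = r(φ) + 1
        return r(phi[1]) + 1
    return max(r(phi[1]), r(phi[2])) + 1     # r((φ □ ψ)) = max(r(φ), r(ψ)) + 1

assert r(("and", ("not", "p"), "q")) == 2
```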

We now use the technique of definition by recursion to define the notion of subformula.

Definition 2.1.7 The set of subformulas $Sub(\varphi)$ is given by

$Sub(\varphi) = \{ \varphi \}$ for atomic $\varphi$

$Sub(\varphi_1 \square \varphi_2) = Sub(\varphi_1) \cup Sub(\varphi_2) \cup \{ \varphi_1 \square \varphi_2 \}$

$Sub(¬ \varphi) = Sub(\varphi) \cup \{ ¬ \varphi \}$.

We say that $\psi$ is a subformula of $\varphi$ if $\psi \in Sub(\varphi)$.
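Definition 2.1.7 is again a definition by recursion, this time with sets as values. A sketch, assuming propositions are encoded as nested tuples (atoms as strings, `("not", phi)` and `(op, phi, psi)` otherwise); tuples are hashable in Python, so formulas can be set elements:

```python
def sub(phi):
    """The set Sub(φ) of subformulas of φ, following Definition 2.1.7."""
    if isinstance(phi, str):                       # Sub(φ) = {φ} for atomic φ
        return {phi}
    if phi[0] == "not":                            # Sub(¬φ) = Sub(φ) ∪ {¬φ}
        return sub(phi[1]) | {phi}
    return sub(phi[1]) | sub(phi[2]) | {phi}       # binary connective case

phi = ("and", "p", ("not", "p"))
assert sub(phi) == {"p", ("not", "p"), phi}
```

Note that $\psi \in Sub(\varphi)$ then becomes an ordinary membership test, `psi in sub(phi)`.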

For a formalization into set theory, see :

and the final remark [page 93] :

Working in set theory, the set $\mathcal W$ of symbols can have arbitrary cardinality, but if $\mathcal W$ is finite or countable, it is conventional to assume that $\mathcal W \subseteq HF$. Then [...] all expressions will lie in $HF$ also, [... i.e.] all finite mathematics lives within $HF$ [where $HF$ is the set of hereditarily finite sets (see page 74)].
