Show that a valid rule is not derivable in intuitionistic propositional calculus.

intuitionistic-logic, logic, propositional-calculus

First, a word on notation: let the following be an inference rule that takes $\Gamma$ (a set of well-formed formulas) as premises and has $\psi$ (a well-formed formula) as its conclusion.

This is represented as

$$ \text{In:}\;\; \Gamma \\ \text{Out:}\;\; \psi $$

I'm using this two-line notation to represent an inference rule as a mathematical object, without ascribing any properties to it. By mentioning an inference rule in this way, I don't intend to make it part of the deductive system by fiat.

I want to show that the following rule is valid but not derivable in intuitionistic propositional calculus; it is mentioned in passing in this paper, which I'm currently trying to read and make sense of.

$$ \text{In:} \;\; \lnot \alpha \to \beta \lor \gamma \\ \text{Out:} \;\;(\lnot\alpha \to\beta) \lor (\lnot\alpha\to\gamma) $$

Proving that it's valid seems fairly straightforward. I'm using the Heyting algebra semantics described here in an extremely naive way. Let $A, B, G$ be the subsets of $\mathbb{R}$ (with the standard topology) interpreting $\alpha, \beta, \gamma$, and let $E = \text{int}(A^c)$ be the interpretation of $\lnot\alpha$. Note that $E = \text{int}(E)$, i.e. $E$ is open. I'm pretty sure that showing the interpretation of the conclusion is always a superset of the interpretation of the premise demonstrates the validity of the rule: the superset property forces the conclusion to hold whenever the premise does, which is what we want from an inference rule.

$$ \text{int}(E \cup B\cup G) \subset E \cup B \cup G $$
$$ \text{int}(E \cup B \cup G) \subset E \cup B \cup E \cup G $$
$$ \text{int}(E \cup B \cup G) \subset \text{int}(E) \cup B \cup \text{int}(E) \cup G $$
$$ \text{int}(E \cup B \cup G) \subset (\text{int}(E) \cup B) \cup (\text{int}(E) \cup G) $$

And now we unapply the interpretation, giving our rule as desired.

$$ \text{In:} \;\; \lnot \alpha \to \beta \lor \gamma \\ \text{Out:} \;\;(\lnot\alpha \to\beta) \lor (\lnot\alpha\to\gamma) $$
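
(As a quick sanity check of the set-theoretic steps above — not of the translation itself — here is a brute-force verification of the final inclusion on a small finite topological space. The space and the helper names are made up for this sketch.)

```python
from itertools import combinations

# A small finite topological space, made up for this check: X = {0,1,2,3}
# with a handful of opens closed under union and intersection.
X = frozenset({0, 1, 2, 3})
OPENS = [frozenset(s) for s in
         [(), (0,), (1,), (0, 1), (0, 1, 2), (0, 1, 2, 3)]]

def interior(s):
    """Union of all opens contained in s."""
    out = frozenset()
    for o in OPENS:
        if o <= s:
            out |= o
    return out

def subsets(xs):
    xs = sorted(xs)
    for r in range(len(xs) + 1):
        for c in combinations(xs, r):
            yield frozenset(c)

# int(E | B | G) is contained in (int(E) | B) | (int(E) | G)
# for every open E (so int(E) == E) and arbitrary B, G.
for E in OPENS:
    for B in subsets(X):
        for G in subsets(X):
            assert interior(E | B | G) <= (interior(E) | B) | (interior(E) | G)
print("the inclusion at the end of the chain holds on this space")
```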

What's a good technique for showing that the rule is not derivable?

Best Answer

Your translation to topological semantics is not quite right. $p\rightarrow q$ translates to $\text{int}(P^{c}\cup Q)$, not $\text{int}(P\cup Q)$. So what you wanted to show was $$\text{int}(\text{cl}(A)\cup B\cup G) \subseteq \text{int}(\text{cl}(A)\cup B) \cup \text{int}(\text{cl}(A)\cup G),$$ using the fact that $E^{c} = (\text{int}(A^c))^c = \text{cl}(A)$. Your proof doesn't work, since it relied on openness of $E$, and $\text{cl}(A)$ is not open in general.
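
(Concretely, the Heyting operations on the open sets of a space $X$ are $\land = \cap$, $\lor = \cup$, $P\rightarrow Q = \text{int}(P^c\cup Q)$, and $\lnot P = \text{int}(P^c)$. Here is a minimal sketch of these clauses on a made-up finite topological space, just to spell out the corrected translation; the space and names are arbitrary.)

```python
# The Heyting operations on the open sets of a (made-up) finite topological
# space X, spelling out the corrected clause for implication:
#     P -> Q  is  int(complement(P) | Q),   and   not P  is  P -> empty.
X = frozenset({0, 1, 2, 3})
OPENS = [frozenset(s) for s in
         [(), (0,), (1,), (0, 1), (0, 1, 2), (0, 1, 2, 3)]]

def interior(s):
    """Largest open subset of s: the union of all opens contained in s."""
    out = frozenset()
    for o in OPENS:
        if o <= s:
            out |= o
    return out

def closure(s):
    """Complement of the interior of the complement."""
    return X - interior(X - s)

def implies(p, q):
    """Heyting implication on opens: int(complement(p) | q)."""
    return interior((X - p) | q)

def neg(p):
    """Heyting negation: p -> empty set, i.e. int(complement(p))."""
    return implies(p, frozenset())

P, Q = frozenset({0}), frozenset({1})
print(implies(P, Q))    # frozenset({1}), i.e. int({1,2,3} | {1})
print(neg(P))           # frozenset({1}), i.e. int({1,2,3})
print(closure(P))       # frozenset({0,2,3}); compare int(cl(A) | ...) above
```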

In fact, this inclusion is not true in general. Take $A = \bigcup_{n\in \mathbb{N}^+} (\frac{1}{2n+1},\frac{1}{2n})$, $B = (-\infty,0)$, and $G = (0,\infty)$. Then $0\in \text{int}(\text{cl}(A)\cup B\cup G) = \mathbb{R}$, but $0\notin \text{int}(\text{cl}(A)\cup B) \cup \text{int}(\text{cl}(A)\cup G)$: the gaps between the intervals of $A$ accumulate at $0$ from the right, so no neighborhood of $0$ fits inside $\text{cl}(A)\cup B$, and $\text{cl}(A)\cup G$ contains no negative numbers at all.
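
(If you want to machine-check the witnesses here, the following sketch uses exact rational arithmetic to confirm that the gap midpoints of $A$ approach $0$ from the right while avoiding $\text{cl}(A)\cup B$, and that the points $-1/n$ approach $0$ from the left while avoiding $\text{cl}(A)\cup G$; the helper names are mine.)

```python
from fractions import Fraction as F

# Exact membership tests for the counterexample: A is the union over n >= 1
# of (1/(2n+1), 1/(2n)), so cl(A) adds the endpoints and the limit point 0;
# B = (-inf, 0) and G = (0, inf).  The helper names are just for this sketch.

def in_closure_A(x, n_max=10_000):
    if x == 0:
        return True                          # 0 is a limit point of A
    if x < 0 or x > F(1, 2):
        return False
    for n in range(1, n_max + 1):
        lo, hi = F(1, 2 * n + 1), F(1, 2 * n)
        if lo <= x <= hi:
            return True
        if x > hi:
            return False                     # x sits in the gap just above this interval
    raise ValueError("point too close to 0 for this truncation; raise n_max")

def in_B(x):
    return x < 0

def in_G(x):
    return x > 0

assert in_closure_A(F(0))                         # so cl(A) | B | G is all of R, hence open
for n in range(1, 50):
    m = (F(1, 2 * n + 2) + F(1, 2 * n + 1)) / 2   # gap midpoints, -> 0 from the right
    assert not (in_closure_A(m) or in_B(m))       # m misses cl(A) | B ...
    assert in_G(m)                                # ... but of course lies in cl(A) | B | G
    left = F(-1, n)                               # -> 0 from the left
    assert not (in_closure_A(left) or in_G(left)) # misses cl(A) | G
print("points outside cl(A)|B and outside cl(A)|G accumulate at 0 from either side,")
print("so 0 is in neither int(cl(A)|B) nor int(cl(A)|G), while int(cl(A)|B|G) = R")
```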

Ok, but what does this actually show? It shows that when $\alpha$, $\beta$, and $\gamma$ are propositional constants, $\lnot \alpha\rightarrow (\beta\lor \gamma)\not\models (\lnot \alpha\rightarrow \beta)\lor(\lnot \alpha\rightarrow \gamma)$. Equivalently, by completeness of IPL, $$\not\vdash (\lnot \alpha\rightarrow (\beta\lor \gamma))\rightarrow ((\lnot \alpha\rightarrow \beta)\lor(\lnot \alpha\rightarrow \gamma)).$$ That is, in the language of the paper you linked to, this rule is not derivable.
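
(A self-contained way to machine-check this non-derivability is to exhibit a finite Kripke countermodel and evaluate the formula with the standard Kripke semantics for intuitionistic propositional logic. The four-point frame below is one standard choice, not taken from the paper you linked.)

```python
# A four-point Kripke countermodel for
#     (~a -> (b | c)) -> ((~a -> b) | (~a -> c)),
# evaluated with the usual Kripke semantics for intuitionistic propositional
# logic.  The frame (root r below three maximal worlds t, u, v) and the
# valuation are chosen for illustration.

ABOVE = {"r": {"r", "t", "u", "v"}, "t": {"t"}, "u": {"u"}, "v": {"v"}}
VAL = {"a": {"t"}, "b": {"u"}, "c": {"v"}}   # up-closed (persistent) valuation

def forces(w, phi):
    """Kripke forcing relation, with formulas encoded as nested tuples."""
    kind = phi[0]
    if kind == "var":
        return w in VAL[phi[1]]
    if kind == "bot":
        return False
    if kind == "or":
        return forces(w, phi[1]) or forces(w, phi[2])
    if kind == "imp":   # must be preserved at every world above w
        return all((not forces(w2, phi[1])) or forces(w2, phi[2])
                   for w2 in ABOVE[w])
    raise ValueError(kind)

def neg(phi):
    return ("imp", phi, ("bot",))

a, b, c = ("var", "a"), ("var", "b"), ("var", "c")
premise = ("imp", neg(a), ("or", b, c))
conclusion = ("or", ("imp", neg(a), b), ("imp", neg(a), c))

print(forces("r", premise))      # True: r forces ~a -> (b | c)
print(forces("r", conclusion))   # False: by soundness, the implication is not an IPL theorem
```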

To show that the rule is admissible means something different: whenever we substitute sentences $\varphi_\alpha$, $\varphi_\beta$, and $\varphi_\gamma$ for $\alpha$, $\beta$, and $\gamma$, respectively, we have $$\text{If }\vdash \lnot \varphi_\alpha\rightarrow (\varphi_\beta\lor \varphi_\gamma),\text{ then }\vdash (\lnot \varphi_\alpha\rightarrow \varphi_\beta)\lor(\lnot \varphi_\alpha\rightarrow \varphi_\gamma).$$ Usually, to show that a rule is admissible, one has to do some induction on proofs. It seems that the original reference for the admissibility of this rule is Theorem 3.1 in this paper (Ronald Harrop, "Concerning Formulas of the Types $A\rightarrow B\lor C$, $A\rightarrow (Ex)B(x)$ in Intuitionistic Formal Systems", J. Symbolic Logic 25 (1960), no. 1, 27–32; JSTOR).
