Thank you, Ominvium, for the right answer. Where I made a major mistake is not putting the clues (who said what) into sentences. I only considered each person's statement as a whole, and created sentences that merely captured whether each person was lying or not.
Here is the complete solution (version 1), described by Ominvium. Since the original question is from the Propositional Resolution (PR) section of my notes, the solution below is in that form.
(comments very much appreciated)
\begin{array}{lll}
A & = & \text{Adams did it} \\
B & = & \text{Brown did it} \\
C & = & \text{Clark did it} \\
p & = & \text{Brown knew victim} \\
q & = & \text{Brown was in town } \\
r & = & \text{Clark was in town} \\
t & = & \text{Adams was in town} \\
S_A & = & (\lnot A \land p) \\
S_B & = & (\lnot B \land \lnot p \land \lnot q) \\
S_C & = & (\lnot C \land q \land t) \\
\end{array}
Quick recap on Propositional Resolution (PR)
PR depends on clausal form:
\begin{array}{ll}
(p \lor q) = & \left\{p,q\right\} \\
(p \land q) = & \left\{p\right\} \\
& \left\{q\right\} \\
\end{array}
The key thing to note for this question is that, following the PR procedure, if we derive an empty clause {}, the sentences contain a contradiction. For example:
\begin{array}{ll}
(p \land \lnot p) = & \left\{ p \right\} \\
& \left\{ \lnot p \right\} \\
& \text{--------} \\
& \left\{\right\}
\end{array}
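The resolution step itself is easy to mechanize. Here is a minimal sketch in Python (the frozenset clause representation and the `resolve` helper are my own illustration, not part of the notes):

```python
# Clauses are sets of literals; a literal is a string, with negation marked by "~".

def negate(lit):
    """Return the complementary literal: p <-> ~p."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Yield every resolvent of two clauses (one per complementary pair)."""
    for lit in c1:
        if negate(lit) in c2:
            yield (c1 - {lit}) | (c2 - {negate(lit)})

# The recap's example: {p} and {~p} resolve to the empty clause.
resolvents = list(resolve(frozenset({"p"}), frozenset({"~p"})))
print(resolvents)  # [frozenset()] -- the empty clause, i.e. a contradiction
```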
So in my approach, I took the three scenarios: 1) Adams is lying, 2) Brown is lying, or 3) Clark is lying, and worked through each to see which results in an empty clause.
We have three versions that we need to check:
1) Adams is lying and the others are telling the truth:
\begin{array}{lll}
\lnot S_A & = \lnot ( \lnot A \land p ) & \qquad\qquad \\
& = ( A \lor \lnot p ) \\
\end{array}
\begin{array}{lll}
(S_B \land S_C ) \land \lnot S_A \\
1) \lnot B \\
2) \lnot p \\
3) \lnot q \\
4) \lnot C \\
5) q \\
6) t \\
7) \left\{A, \lnot p\right\} \\
\text{--------} \\
8) \left\{\right\}& \mbox{(3,5) Contradiction} \\
\end{array}
2) Brown is lying and the others are telling the truth:
\begin{array}{lll}
\lnot S_B & = \lnot ( \lnot B \land \lnot p \land \lnot q) & \qquad\qquad \\
& = ( B \lor p \lor q) & \\
\end{array}
\begin{array}{ll}
(S_A \land S_C ) \land \lnot S_B \\
1) \lnot A \\
2) p \\
3) \lnot C \\
4) q \\
5) t \\
6) \left\{B, p, q \right\} \\
\text{--------} \\
7) \mbox{No Contradiction} \\
\end{array}
3) Clark is lying and the others are telling the truth:
\begin{array}{lll}
\lnot S_C & = \lnot ( \lnot C \land q \land t ) & \qquad\qquad \\
& = ( C \lor \lnot q \lor \lnot t ) \\
\end{array}
\begin{array}{lll}
(S_A \land S_B ) \land \lnot S_C \\
1) \lnot A \\
2) p \\
3) \lnot B \\
4) \lnot p \\
5) \lnot q \\
6) \left\{C, \lnot q, \lnot t \right\} \\
\text{--------} \\
7) \left\{\right\} & \mbox{(2,4) Contradiction} \\
\end{array}
Since (1) and (3) lead to contradictions, only (2) can be consistent.
$\therefore $ Brown is lying, and since $\lnot A$ and $\lnot C$ hold in that scenario, Brown is the murderer.
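As a sanity check on the resolution runs above, the three scenarios can also be brute-forced over all truth assignments. A quick sketch (the encodings of $S_A$, $S_B$, $S_C$ follow the definitions at the top; variable order is $A, B, C, p, q, t$):

```python
from itertools import product

# Truth-value encodings of the three statements, from the definitions above.
def S_A(A, B, C, p, q, t): return (not A) and p
def S_B(A, B, C, p, q, t): return (not B) and (not p) and (not q)
def S_C(A, B, C, p, q, t): return (not C) and q and t

# Each scenario: one person lies, the other two tell the truth.
scenarios = {
    "Adams lying": lambda *v: (not S_A(*v)) and S_B(*v) and S_C(*v),
    "Brown lying": lambda *v: S_A(*v) and (not S_B(*v)) and S_C(*v),
    "Clark lying": lambda *v: S_A(*v) and S_B(*v) and (not S_C(*v)),
}

# A scenario is consistent iff some assignment of the six variables satisfies it.
satisfiable = {name: any(f(*v) for v in product([False, True], repeat=6))
               for name, f in scenarios.items()}
print(satisfiable)  # only "Brown lying" should come out True
```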
Such problems can be quite hard. I pride myself on being reasonably good at them, but I'm unable to give you more than very vague and high-level guidelines for how to attack them.
In some cases you may be lucky enough to have a surefire method to fall back on. In particular, if you have a constructive proof that the formal system you're using is complete, then this gives you a guaranteed if-everything-else-fails approach: first verify that the formula you have is a tautology, using truth tables, then trace out the steps in the completeness proof as applied to your formula.
The downside is that this method can lead to some humongously long and cumbersome formal proofs, even for pretty innocent-looking conclusions. Trying to be smart first is almost always worth the effort.
What is a general plan for trying to be smart, then? Eventually it comes down to practice and experience. I think, tentatively, that the required experience can be broken into two broad categories.
(1) A good intuitive understanding of logical formulas.
Or preferably several intuitive views of formulas that you have enough practice with to switch between effortlessly, to look at the problem from several sides.
The official semantics of propositional formulas say that they specify how to combine truth values for the propositional letters into a truth value for the entire formula, using truth tables for each of the connectives. That's not wrong, but it's also not very stimulating to the imagination.
I think it is important to know at least one "resource interpretation" for propositional logic, along the lines of:
- Each propositional letter is some kind of primitive resource.
- $\varphi\to\psi$ is a machine that will give you a $\psi$ if you put a $\varphi$ into it.
- $\varphi\land\psi$ is a boxed set containing a $\varphi$ and a $\psi$.
- $\varphi\lor\psi$ is a mystery box that contains either a $\varphi$ or a $\psi$. You're allowed to look into it to see what it is. When you need to make a $\varphi\lor\psi$ yourself, you get to decide whether to put a $\varphi$ or a $\psi$ into the box.
- $\neg\varphi$ is a genie who always wanted a $\varphi$ and will grant you unlimited wishes if you give one to him. Or perhaps it is a snooty condescending genie who's convinced that $\varphi$s don't exist and is willing to bet you anything that you can't show him one.
Then a proof of $\varphi$ is the same as a recipe for making a $\varphi$.
For example, here you're asked to prove $(\neg\alpha\to\beta)\to(\neg\beta\to\alpha)$. My first step would be to get a mental model of what that means in the resource picture.
Hmm, I need to construct a machine that gets a $\neg\alpha\to\beta$ and produces another machine ... so that's the same as making a machine that gets a $\neg\alpha\to\beta$ and a $\neg\beta$ and must produce an $\alpha$. I have to figure out how I would combine those two components to get the required output.
This doesn't immediately help, because there seems to be nowhere to start. The $\neg \beta$ genie would give me the desired $\alpha$ if I provided him a $\beta$; I could get that $\beta$ from $\neg\alpha\to\beta$, but then I need a $\neg\alpha$. Hmm.
This is where having several perspectives available helps, because -- back to the truth-value picture -- $\alpha$ is either true or false. If it's true, then I'm done immediately; if it's false then I have $\neg\alpha$ and can trade that for a $\beta$ and finally for an $\alpha$, and I'll be done.
So if the system proves the law of excluded middle, it means in the resource picture I can get, for free, a box that will contain either $\alpha$ or $\neg\alpha$. Open the box; if it's an $\alpha$ inside, output that; but if it's a $\neg\alpha$, stick it into the $\neg\alpha\to\beta$ machine, and give the $\beta$ that comes out to the $\neg\beta$ genie. Then wish for an $\alpha$ you can output.
Unfortunately that doesn't really help us because the axioms you're using here don't seem to make it particularly easy to prove the law of excluded middle (actually as quoted it doesn't even know what $\lor$ is). And that brings us to the second kind of experience we need:
(2) Experience in working with the formal system in question.
This is where we would file (2a) "if I understand how to do something in the resource picture, then how do I express that recipe in form of a proof in this particular system?". It's also where we need to answer (2b) "if I need to do something that's not possible in the constructive resource picture alone, then what means do I get for escaping it?" -- such as appealing to excluded middle, or double-negation elimination, or Peirce's law.
Of these, (2a) is mostly a matter of practice with constructing proofs for the system. In your system, MP and the laws (L1), (L2) and (L4) are there essentially to express manipulations of things into and out of machines, construction of machines that implement a given recipe, and so forth. HS also seems to be in this category (presumably it stands for Hypothetical Syllogism; in your example it appears to be a shorthand rule for connecting the output of one machine to the input of another).
Also covered in (2a) is how to deal with negation in the system, which unfortunately varies a good deal more between systems than dealing with implication does.
All in all, (2a) can be reasonably gained by doing a lot of exercises in the system. Unfortunately the skill this practice will result in is pretty much useless for anything but solving this kind of exercise, which only ever happens in formal-logic classrooms. In contrast, the kind of understanding described in (1) is, in my experience, pretty useful even outside classroom settings.
For (2b) there's some generally useful knowledge about common kinds of ways to escape the resource picture in proofs. In more technical terms, this is about knowing a variety of laws that can be added to intuitionistic logic to produce classical logic. This is where I went astray above, because my first idea was to use excluded middle, but excluded middle is not a primitive law in the particular system we're working with.
(I've kept that detour in the answer as an illustration of the kind of false starts one should be prepared to make a lot of before finding something that works).
A different escape hatch is double-negation elimination, your rule (L5). But there doesn't seem to be any simple way to bring that to bear on $(\neg\alpha\to\beta)\to(\neg\beta\to\alpha)$.
On the other hand, (L3) is clearly a form of contraposition, and $(\neg\alpha\to\beta)\to(\neg\beta\to\alpha)$ is also contraposition-like -- it's only a difference of where negations go. I also know, from long experience, that (L3) is a form of contraposition that's not intuitionistically valid, which makes (L3) begin to look promising indeed.
If I want to conclude $\neg\beta\to\alpha$ from (L3), I will need to supply it with $\neg\alpha\to\neg\neg\beta$. But all I have is $\neg\alpha\to\beta$. But -- back to truth values for a second here -- we know that $\neg\neg\beta$ is the same as $\beta$, so $\neg\alpha\to\neg\neg\beta$ is the same as $\neg\alpha\to\beta$.
This we can do in the resource picture. We want to construct a machine for $\neg\alpha\to\neg\neg\beta$. What this machine does is to get a $\neg\alpha$, run it through the $\neg\alpha\to\beta$ we already have, and convert the resulting $\beta$ into $\neg\neg\beta$ using (L6), which conveniently does just that.
And this turns out to work. Getting these intuitive "kinematic" ideas about moving things into and out of machines transformed into the particular proof you quote is now just an application of category (2a) skills. The lines annotated (L1), (L2), MP, HS are part of standard idioms for tying things together; the creative content is in using (L3) and (L6).
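For concreteness, here is roughly how that recipe might come out as a Hilbert-style derivation. I am assuming, as the surrounding discussion suggests, that (L3) is the contraposition axiom $(\neg P\to\neg Q)\to(Q\to P)$ and (L6) is double-negation introduction $P\to\neg\neg P$; the exact statements in your notes may differ.

```latex
\begin{array}{lll}
1. & \neg\alpha\to\beta & \text{hypothesis} \\
2. & \beta\to\neg\neg\beta & \text{(L6)} \\
3. & \neg\alpha\to\neg\neg\beta & \text{HS, 1, 2} \\
4. & (\neg\alpha\to\neg\neg\beta)\to(\neg\beta\to\alpha) & \text{(L3), with } P:=\alpha,\ Q:=\neg\beta \\
5. & \neg\beta\to\alpha & \text{MP, 3, 4} \\
\end{array}
```

The Deduction Theorem then discharges the hypothesis in line 1 to give $(\neg\alpha\to\beta)\to(\neg\beta\to\alpha)$.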
As for how to learn those standard idioms for this particular system, I have no better advice than to look up the proof of the Deduction Theorem for the system (hopefully you have one) and working through exactly what it does in a fair number of examples.
Sorry if this answer is a bit rambling. I'm trying my best to give some kind of inside view of how a problem such as this would be attacked from a "quite experienced" standpoint. This may or may not be helpful to you -- the most general point to take home, I think, is that it is important to have many ways of thinking about logical formulas available, and to be proficient in using all of them.
There are good answers already, but one note:
Another way to understand the choice of the first three axioms you quote is that they are exactly what you need in order to be able to prove the Deduction Theorem by induction on the length of the inner proof.
$H\to H$ takes care of a step in the original proof that just applies the hypothesis.
$A \to (H \to A)$ allows you to translate a step in the original proof that introduces a logical axiom.
$(H \to (P\to Q)) \to ((H \to P) \to (H\to Q))$ is what you need to translate an application of Modus Ponens.
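To see the third axiom in action: suppose the translation has already produced $H\to(P\to Q)$ and $H\to P$ from earlier steps of the original proof. Then the original MP step translates as:

```latex
\begin{array}{lll}
1. & H\to(P\to Q) & \text{already translated} \\
2. & H\to P & \text{already translated} \\
3. & (H\to(P\to Q))\to((H\to P)\to(H\to Q)) & \text{axiom} \\
4. & (H\to P)\to(H\to Q) & \text{MP, 1, 3} \\
5. & H\to Q & \text{MP, 2, 4} \\
\end{array}
```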
Actually it turns out that $H\to H$ can be derived from the two others (though that is not particularly intuitive), so in many presentations it will be left out.
There are many different ways to complete these three axioms such that you can prove exactly all of the propositional tautologies that can be written using $\neg$ and $\to$. The one you're quoting has the advantage of being reasonably simple and intuitively obvious, while still being sufficient to allow all tautologies to be proved.
Together, the Deduction Theorem and Modus Ponens tell us a lot about how the $\to$ connective works, but it turns out there are propositional tautologies written with $\to$ alone that cannot be proved by these two ingredients. It is quite remarkable that the simple-looking fourth axiom can manage in one breath to tell us everything else there is to know about $\to$, as well as (simultaneously) everything about $\neg$ too. I don't have any good intuitive explanation of how it does that. I can just prove that it does.