I think this could be done using a 27-state Markov chain.
I propose using 27 states to represent our knowledge at any given time about the following three: $A$, $B$, $A \implies B$.
$[QQQ]$ means that we do not know whether any of the three are true or false.
$[QTF]$ means that we do not know whether $A$ is true or false, we know that $B$ is true, and we know that $A \implies B$ is false.
The states are therefore: $[QQQ]$,$[QQT]$,$[QQF]$,$[QTQ]$,$[QTT]$,$[QTF]$,$[QFQ]$,$[QFT]$,$[QFF]$,$[TQQ]$,$[TQT]$,$[TQF]$,$[TTQ]$,$[TTT]$,$[TTF]$,$[TFQ]$,$[TFT]$,$[TFF]$,$[FQQ]$,$[FQT]$,$[FQF]$,$[FTQ]$,$[FTT]$,$[FTF]$,$[FFQ]$,$[FFT]$,$[FFF]$.
It would be possible to create a transition matrix encoding the conclusions that can be drawn from whichever state you are currently in.
So if you start in $[QQQ]$ you will always be in $[QQQ]$. There will be a lot of states like that, where no further information can be gained.
But if you are in $[TQT]$ then you must move to $[TTT]$. And once you are in $[TTT]$ you will stay there for higher powers of the matrix.
The $27 \times 27$ matrix $M$ has entries $M_{ij}=1$ if being in State $i$ allows you to move to State $j$, and $M_{ij}=0$ otherwise.
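As a sketch of how such a matrix could be built: the Python snippet below enumerates the 27 states and computes each state's successor under one plausible set of propositional inference rules (modus ponens, modus tollens, falsifying $A \implies B$ from a counterexample, and the cases where $A \implies B$ holds trivially). The rule set and the helper names (`step`, `closure`) are my own assumptions, not fixed by the answer above.

```python
from itertools import product

# Each state is a triple of truth-knowledge about (A, B, A => B):
# 'T' known true, 'F' known false, 'Q' unknown.
STATES = [''.join(s) for s in product('QTF', repeat=3)]

def step(state):
    """Apply one round of (assumed) propositional inference rules."""
    a, b, imp = state
    if a == 'T' and imp == 'T':
        b = 'T'       # modus ponens
    if b == 'F' and imp == 'T':
        a = 'F'       # modus tollens
    if a == 'T' and b == 'F':
        imp = 'F'     # a counterexample falsifies A => B
    if a == 'F' or b == 'T':
        imp = 'T'     # A => B holds vacuously / trivially
    return a + b + imp

def closure(state):
    """Iterate inference until no new knowledge is gained."""
    while True:
        nxt = step(state)
        if nxt == state:
            return state
        state = nxt

# M[i][j] = 1 if state i leads to state j once all conclusions are drawn.
M = [[1 if closure(s) == t else 0 for t in STATES] for s in STATES]

print(closure('TQT'))   # -> 'TTT', as in the example above
print(closure('QQQ'))   # -> 'QQQ': nothing new can be inferred
```

Because `closure` is deterministic, every row of `M` has exactly a single 1, so higher powers of `M` simply stay at the absorbing states, matching the $[TTT]$ remark above.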
From Rutten's own article:
> [1] If Brigitte has yellow paint and blue paint, then Brigitte can mix green;
> [2] Brigitte can mix green with yellow paint or Brigitte can mix green with blue paint.
>
> Now, the logical form of this argument is to be formally rendered in the calculus of propositional logic as follows:
>
> [1*] $P \land Q \to R$;
> [2*] $(P \to R) \lor (Q \to R)$.
>
> Here $P$ = "Brigitte has yellow paint", $Q$ = "Brigitte has blue paint" and $R$ = "Brigitte can mix green". As I show in my lecture, the calculus of propositional logic does in fact render the argument form [1*]-[2*] and thus the argument [1]-[2] **logically valid**. This is how I arrived at the scandal of propositional logic. After all, it is certainly a real scandal that the calculus of propositional logic forces us to **accept argument [1]-[2] as a valid logical consequence**.
There is no scandal, because the given argument [1]$\Rightarrow$[2] is in fact logically invalid, in line with our intuition. Rutten has merely formalised it wrongly as [1*]$\Rightarrow$[2*], so he is mistaken in making the above boldfaced claims.
In propositional calculus, the statement “Brigitte can mix green with yellow paint” is an atomic proposition (let's symbolise it as $Y$, and its blue-paint counterpart as $B$); Rutten has mistranslated it as $P\to R$ (i.e., if Brigitte has yellow paint, then Brigitte can mix green). The correct translation of the given argument is actually $$
P\land Q\to R\implies Y\lor B,$$ which is unsurprisingly invalid.
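This invalidity is easy to machine-check by brute force over all valuations; the helper `implies` and the variable names below are my own, not part of the argument.

```python
from itertools import product

# Material implication as a Python helper.
implies = lambda p, q: (not p) or q

# Search for a countermodel: premise (P and Q) -> R true, conclusion Y or B false.
countermodels = [
    (P, Q, R, Y, B)
    for P, Q, R, Y, B in product([True, False], repeat=5)
    if implies(P and Q, R) and not (Y or B)
]

print(len(countermodels) > 0)   # True: the argument is invalid
```

For instance, the valuation with $P, Q, R$ all true and $Y, B$ both false makes the premise true and the conclusion false.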
In first-order logic, due to the ambiguity of the statement “Brigitte can mix green”, the given argument has three possible translations, each of which is unsurprisingly also invalid: \begin{align}\Big(Py\land Pb\to\forall c\:Mgc\Big)\implies Mgy \lor Mgb\tag1\\
\Big(Py\land Pb\to Mgy \lor Mgb\Big)\implies Mgy \lor Mgb\tag2\\
\Big(Py\land Pb\to\exists c\:Mgc\Big)\implies Mgy \lor Mgb.\tag3\end{align}
Regarding the redundancy mentioned in the Question:
the argument $$\Big(P ∧ Q → R\Big)\to\Big((P → R) ∨ (Q → R)\Big)$$ is indeed logically equivalent to the argument $$\Big(P ∧ Q\Big)\to \Big(P ∨ Q\Big),$$ so $R$ is a red herring, I guess.
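The claimed equivalence can likewise be checked over all eight valuations of $P$, $Q$, $R$ (in fact both formulas are tautologies, which is exactly why $R$ does no work); the lambda names here are mine.

```python
from itertools import product

implies = lambda p, q: (not p) or q

# (P and Q -> R) -> ((P -> R) or (Q -> R))
lhs = lambda P, Q, R: implies(implies(P and Q, R), implies(P, R) or implies(Q, R))
# (P and Q) -> (P or Q)
rhs = lambda P, Q, R: implies(P and Q, P or Q)

# Both formulas agree on every valuation, so R plays no essential role.
same = all(lhs(P, Q, R) == rhs(P, Q, R)
           for P, Q, R in product([True, False], repeat=3))
print(same)   # True
```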
Best Answer
The basic idea is to always check x, y, z before A and B. Given the short-circuiting of "and" and "or", here's an expression that will never evaluate A and B more than once:
where the order of precedence from highest to lowest is: not, and, or.
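The expression itself is not shown above, but the short-circuit behaviour it relies on can be illustrated with a small sketch; the functions `A` and `B` below are hypothetical stand-ins that count how often they are called.

```python
calls = {"A": 0, "B": 0}

def A():
    calls["A"] += 1
    return True

def B():
    calls["B"] += 1
    return False

x, y, z = True, False, True

# "and"/"or" stop as soon as the result is determined, so once the cheap
# tests x, y, z settle a subexpression, A() or B() is skipped entirely;
# in any case each runs at most once here.
result = (x and A()) or (not y and B())
print(calls)   # {'A': 1, 'B': 0}
```

Here the left operand of `or` is already true, so `B()` is never evaluated at all.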