A Better Way to Explain Forcing?

Tags: exposition, forcing, lo.logic

Let me begin by formulating a concrete (if not 100% precise) question, and then I'll explain what my real agenda is.

Two key facts about forcing are (1) the definability of forcing; i.e., the existence of a notion $\Vdash^\star$ (to use Kunen's notation) such that $p\Vdash \phi$ if and only if $(p \Vdash^\star \phi)^M$, and (2) the truth lemma; i.e., anything true in $M[G]$ is forced by some $p\in G$.
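
In Kunen's notation, with $\sigma_1,\dots,\sigma_n$ ranging over $P$-names and $\sigma_{iG}$ denoting their evaluations by $G$, these two facts can be displayed as

$$p\Vdash \phi(\sigma_1,\dots,\sigma_n) \quad\Longleftrightarrow\quad \bigl(p\Vdash^\star \phi(\sigma_1,\dots,\sigma_n)\bigr)^M,$$

$$M[G]\models \phi(\sigma_{1G},\dots,\sigma_{nG}) \quad\Longrightarrow\quad \exists p\in G\;\; p\Vdash \phi(\sigma_1,\dots,\sigma_n).$$

(The second implication is in fact an equivalence, since forcing also implies truth in every generic extension containing the condition, but the left-to-right direction is the truth lemma proper.)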

I am wondering if there is a way to "axiomatize" these facts by saying what properties forcing must have, without actually introducing a poset or saying that $G$ is a generic filter or that forcing is a statement about all generic filters, etc. And when I say that forcing "must have" these properties, I mean that by using these axioms, we can go ahead and prove that $M[G]$ satisfies ZFC, and only worry later about how to construct something that satisfies the axioms.


Now for my hidden agenda. As some readers know, I have written "A beginner's guide to forcing", where I try to give a motivated exposition of forcing. But I am not entirely satisfied with it, and I have recently been having some interesting email conversations with Scott Aaronson that have prompted me to revisit this topic.

I am (and I think Scott is) fairly comfortable with the exposition up to the point where one recognizes that it would be nice if one could add some function $F : \aleph_2^M \times \aleph_0 \to \lbrace 0,1\rbrace$ to a countable transitive model $M$ to get a bigger countable transitive model $M[F]$. It's also easy to grasp, by analogy from algebra, that one also needs to add further sets "generated by $F$." And with some more thought, one can see that adding arbitrary sets to $M$ can create contradictions, and that even if you pick an $F$ that is "safe," it's not immediately clear how to add a set that (for example) plays the role of the power set of $F$, since the "true" powerset of $F$ (in $\mathbf{V}$) is clearly the wrong thing to add. It's even vaguely plausible that one might want to introduce "names" of some sort to label the things you want to add, and to keep track of the relations between them, before you commit to saying exactly what these names are names of. But then there seems to be a big conceptual leap to saying, "Okay, so now instead of $F$ itself, let's focus on the poset $P$ of finite partial functions, and a generic filter $G$. And here's a funny recursive definition of $P$-names." Who ordered all that?
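
For concreteness, and just as a reminder of what the standard answer looks like in this example (the textbook Cohen poset, nothing new), one takes

$$P = \{\,p : p \text{ is a finite partial function from } \aleph_2^M\times\aleph_0 \text{ to } \{0,1\}\,\}, \qquad q\le p \iff q\supseteq p,$$

and sets $F = \bigcup G$ for a generic filter $G\subseteq P$: compatibility of the conditions in $G$ makes $F$ a function, and genericity makes it total. The puzzle is how anyone would think to set things up this way in the first place.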

In Cohen's own account of the discovery of forcing, he wrote:

There are certainly moments in any mathematical discovery when the resolution of a problem takes place at such a subconscious level that, in retrospect, it seems impossible to dissect it and explain its origin. Rather, the entire idea presents itself at once, often perhaps in a vague form, but gradually becomes more precise.

So a 100% motivated exposition may be a tad ambitious. However, it occurs to me that the following strategy might be fruitful. Take one of the subtler axioms, such as Comprehension or Powerset. We can "cheat" by looking at the textbook proof that $M[G]$ satisfies the axiom. This proof is actually fairly short and intuitive if you are willing to take for granted certain things, such as the meaningfulness of this funny $\Vdash$ symbol and its two key properties (definability and the truth lemma). The question I have is whether we can actually produce a rigorous proof that proceeds "backwards": We don't give the usual definitions of a generic filter or of $\Vdash$ or even of $M[G]$, but just give the bare minimum that is needed to make sense of the proof that $M[G]$ satisfies ZFC. Then we "backsolve" to figure out that we need to introduce a poset and a generic filter in order to construct something that satisfies the axioms.
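
To illustrate what is being taken for granted, here is a rough sketch (in the standard notation, which is exactly what I would like to postpone) of how the textbook proof of Comprehension in $M[G]$ uses the two key properties. To realize $\{x\in\sigma_G : \phi(x)\}$ as an element of $M[G]$, one writes down the name

$$\rho = \{\,(\pi,p) : \pi\in\operatorname{dom}(\sigma),\ p\Vdash (\pi\in\sigma \wedge \phi(\pi))\,\},$$

which is a set in $M$ by the definability of forcing (together with Comprehension inside $M$), and one then checks that $\rho_G = \{x\in\sigma_G : \phi(x)\}$ by applying the truth lemma in both directions (parameters suppressed).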

If this can be made to work, then I think it would greatly help "ordinary mathematicians" grasp the proof. In ordinary mathematics, expanding a structure $M$ to a larger structure $M[G]$ never requires anything as elaborate as the forcing machinery, so it feels like you're getting blindsided by some deus ex machina. Of course the reason is that the axioms of ZFC are so darn complicated. So it would be nice if one could explain what's going on by first looking at what is needed to prove that $M[G]$ satisfies ZFC, and use that to motivate the introduction of a poset, etc.

By the way, I suspect that in practice, many people learn this stuff somewhat "backwards" already. Certainly, on my first pass through Kunen's book, I skipped the ugly technical proof of the definability of forcing and went directly to the proof that $M[G]$ satisfies ZFC. So the question is whether one can push this backwards approach even further, and postpone even the introduction of the poset until after one sees why a poset is needed.

Best Answer

I have proposed such an axiomatization. It is published in Comptes Rendus. Mathématique, which returned to the Académie des Sciences in 2020 and is now fully open access. Here is a link:

https://doi.org/10.5802/crmath.97

The axiomatization I have proposed is as follows:

Let $(M, \mathbb P, R, \left\{\Vdash\phi : \phi\in L(\in)\right\}, C)$ be a quintuple such that:

  • $M$ is a transitive model of $ZFC$.

  • $\mathbb P$ is a partial ordering with a maximum element.

  • $R$ is a ternary relation that is definable in $M$ and absolute (the $\mathbb P$-membership relation, usually denoted by $M\models a\in_p b$).

  • For each formula $\phi$ with $n$ free variables, $\Vdash\phi$ is a definable $(n+1)$-ary predicate in $M$, called the forcing predicate corresponding to $\phi$.

  • $C$ is a predicate (the genericity predicate).

As usual, we use $G$ to denote a filter satisfying the genericity predicate $C$.

Assume that the following axioms hold:

(1) The downward closedness of forcing: Given a formula $\phi$, for all $\overline{a}$, $p$ and $q$, if $M\models (p\Vdash\phi)[\overline{a}]$ and $q\leq p$, then $M\models (q\Vdash\phi)[\overline{a}]$.

(2) The downward closedness of $\mathbb P$-membership: For all $p$, $q$, $a$ and $b$, if $M\models a\in_p b$ and $q\leq p$, then $M\models a\in_q b$.

(3) The well-foundedness axiom: The binary relation $\exists p;\ M\models a\in_p b$ is well-founded, both in $\mathbf{V}$ and in $M$. In particular, it is left-small in $M$; that is, $\left\{a : \exists p;\ M\models a\in_p b\right\}$ is a set in $M$.

(4) The generic existence axiom: For each $p\in \mathbb P$, there is a generic filter $G$ containing $p$ as an element.

Let $F_G$ denote the transitive collapse of the well-founded relation $\exists p\in G; M\models a\in_p b$.
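
In other words, $F_G$ is given by the usual Mostowski collapse recursion,

$$F_G(b) = \{\,F_G(a) : \exists p\in G;\ M\models a\in_p b\,\};$$

in the standard setup, where the individuals play the role of $P$-names, this is just the familiar name-evaluation map.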

(5) The canonical naming for individuals axiom: $\forall a\in M;\exists b\in M; \forall G; F_G(b)=a$.

(6) The canonical naming for $G$ axiom: $\exists c\in M;\forall G; F_G(c)= G$.

Let $M[G]$ denote the direct image of $M$ under $F_G$. The next two axioms are the fundamental duality that you have mentioned:

(7) $M[G]\models \phi[F_G(\overline{a})]$ iff $\exists p\in G; M\models (p\Vdash\phi)[\overline{a}]$, for all $\phi$, $\overline{a}$, $G$.

(8) $M\models (p\Vdash\phi)[\overline{a}]$ iff $\forall G\ni p; M[G]\models \phi[F_G(\overline{a})]$, for all $\phi$, $\overline{a}$, $p$.

Finally, we have the universality of $\mathbb P$-membership axiom:

(9) Given an individual $a$, if $a$ is a downward closed relation between individuals and conditions, then there is a $\mathbb P$-imitation $c$ of $a$, that is, $M\models b\in_p c$ iff $(b,p)\in a$, for all $b$ and $p$.

It follows that $(M, \mathbb P, R, \left\{\Vdash\phi : \phi\in L(\in)\right\}, C, G)$ represents a standard forcing-generic extension: the usual definitions of the forcing predicates can be recovered, the usual definition of genericity can also be recovered ($G$ intersects every dense set in $M$), and $M[G]$ is a model of $ZFC$ determined by $M$ and $G$, and it is the least such model. (Axiom $(9)$ is used only in the proof that $M[G]$ is a model.)
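
In particular, the genericity predicate recovered from these axioms is the familiar one,

$$C(G) \quad\Longleftrightarrow\quad G \text{ is a filter on } \mathbb P \text{ with } G\cap D\neq\emptyset \text{ for every dense } D\subseteq\mathbb P \text{ such that } D\in M,$$

and, conversely (as a rough gloss rather than the paper's own notation), the standard apparatus of $P$-names over a countable transitive model is the prototypical structure satisfying the axioms, with $\mathbb P$-membership read off as the downward closure of the name relation, $M\models a\in_p b \iff \exists q\geq p\;(a,q)\in b$, and the forcing predicates given by Kunen's $\Vdash^\star$.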
