Set Theory – Motivation Behind Generic Filters in Forcing


So I was reading the Wikipedia article on forcing to try to get an intuitive idea of forcing in set theory. There is this paragraph in it:

A subtle point of forcing is that, if $X$ is taken to be an arbitrary "missing subset" of some set in $M$, then the $M[X]$ constructed "within $M$" may not even be a model. This is because $X$ may encode "special" information about $M$ that is invisible within $M$ (e.g. the countability of $M$), and thus prove the existence of sets that are "too complex for $M$ to describe".

Then there is another line about how forcing avoids this problem:

Forcing avoids such problems by requiring the newly introduced set $X$ to be a generic set relative to $M$. Some statements are "forced" to hold for any generic $X$. For example, a generic $X$ is "forced" to be infinite. Furthermore, any property (describable in $M$) of a generic set is "forced" to hold under some forcing condition. The concept of "forcing" can be defined within $M$, and it gives $M$ enough reasoning power to prove that $M[X]$ is indeed a model that satisfies the desired properties.

Here is the definition of a "generic filter":

For $G$ to be "generic relative to $M$" means:
$\bullet$ If $D \in M$ is a "dense" subset of $\mathbb{P}$ (that is, for each $p \in \mathbb{P}$, there exists a $q \in D$ such that $q \leq p$), then $G \cap D \neq \emptyset$.

Why is the definition of generic filter designed like this? (intersecting with dense subsets of $\mathbb{P}$ in $M$). What does it do to avoid the problem mentioned in the first paragraph?

Note: I don't have any background in set theory, except that I have read some of Enderton's set theory textbook.

Best Answer

I think the modern abstract approach to forcing obscures the underlying motivation. Cohen's original approach (as in his monograph Set Theory and the Continuum Hypothesis) is much more concrete and shows why one might think of defining forcing as we do.

I'll briefly describe a hybrid version that shows why the dense open set requirement comes up in the modern approach.

You want it to be the case that, for every sentence $\varphi$ in the language of forcing, there's some condition $p\in G$ which decides $\varphi.$ This is how we ensure that every first-order fact about $M[G]$ depends only on some individual condition being in $G$ and so can be talked about in $M$ (rather than depending on all of $G,$ or on more of $G$ than $M$ can know about).

An equivalent way of phrasing this requirement is that for every $\varphi$ in the language of forcing, $$G\cap\{p\mid p\Vdash \varphi \text{ or }p\Vdash\neg\varphi\}\neq\emptyset.$$

(By the way, I'm oversimplifying a bit here. You need to do all this with an auxiliary notion of "strong forcing," which is defined via a somewhat elaborate transfinite induction, and then you can define the normal notion of forcing, sometimes called "weak forcing," using that. But that doesn't affect the principle here, so I won't delve into those details.)

Note that for every sentence $\varphi$ in the language of forcing, the set $\{p\mid p\Vdash \varphi \text{ or }p\Vdash\neg\varphi\}$ is a dense open set of conditions.
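Here is a sketch of why that set, call it $D_\varphi = \{p\mid p\Vdash \varphi \text{ or } p\Vdash\neg\varphi\},$ is dense and open (with "open" meaning closed under strengthening of conditions):

```latex
\textbf{Open:} if $p \Vdash \varphi$ and $q \leq p$, then $q \Vdash \varphi$,
since forcing is preserved under strengthening; the same holds for $\neg\varphi$.
So $p \in D_\varphi$ and $q \leq p$ imply $q \in D_\varphi$.

\textbf{Dense:} given any condition $p$, either some $q \leq p$ has
$q \Vdash \varphi$, and then $q \in D_\varphi$ with $q \leq p$; or else no
extension of $p$ forces $\varphi$, in which case $p \Vdash \neg\varphi$
(that is exactly the definition of forcing a negation), so $p \in D_\varphi$
already.
```

So every condition either already decides $\varphi$ or can be strengthened to one that does, which is density.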

So if you require that $G$ meet every dense open set of conditions in $M,$ it will meet all the sets $\{p\mid p\Vdash \varphi \text{ or }p\Vdash\neg\varphi\},$ which are the dense open sets that we really care about.

But the dense open set requirement is more elegant, in that it can be described purely algebraically, in terms of the partial ordering alone, without having to develop the forcing relation first.

So that's why we look at dense open sets in defining genericity.
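To make density concrete, here is a small sketch (not part of the answer above) using Cohen forcing, where conditions are finite partial functions from $\omega$ to $\{0,1\}$ ordered by reverse inclusion, and $D_n = \{p \mid n \in \operatorname{dom}(p)\}$ is dense: any condition not yet defined at $n$ can be strengthened to one that is. The poset is infinite, so the code truncates to conditions with domain contained in $\{0,\dots,4\}$; the helper names (`is_dense`, `extends`) are my own, not standard notation.

```python
from itertools import product

# Conditions: finite partial functions {0..N-1} -> {0,1}, encoded as dicts.
# In the forcing order, q <= p ("q is stronger") iff q extends p as a function.
N = 5

def all_conditions():
    """Every partial function from {0..N-1} to {0,1}."""
    conds = []
    for mask in range(2 ** N):            # choose the domain as a bitmask
        dom = [i for i in range(N) if (mask >> i) & 1]
        for vals in product((0, 1), repeat=len(dom)):
            conds.append(dict(zip(dom, vals)))
    return conds

def extends(q, p):
    """q <= p: q agrees with p everywhere p is defined."""
    return all(q.get(i) == v for i, v in p.items())

def is_dense(D, conds):
    """D is dense: every condition has an extension lying in D."""
    return all(any(extends(q, p) for q in D) for p in conds)

conds = all_conditions()   # 3^5 = 243 conditions in this truncation

# D_n forces the generic function to be defined at n; a filter meeting
# every D_n therefore yields a total function, i.e. an infinite object.
for n in range(N):
    D_n = [p for p in conds if n in p]
    assert is_dense(D_n, conds)

print("each D_n is dense among", len(conds), "conditions")
```

Meeting all the $D_n$ is exactly what "forces" the generic object to be an infinite (indeed total) function, matching the Wikipedia remark that a generic $X$ is forced to be infinite.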
