Understanding some uses of the Separation Lemma

logic, model-theory, quantifier-elimination

In my Model Theory course, the lecturer introduces the following Separation Lemma:

For $\mathscr{L}$-theories $\Sigma,\Pi$ and a set $\Gamma\subseteq\text{Sent}(\mathscr{L})$ closed under conjunction and disjunction: if for all models $\mathscr{A}\vDash\Sigma$ and $\mathscr{B}\vDash \Pi$ there is some $\gamma\in\Gamma$ such that $\mathscr{A}\vDash\gamma$ and $\mathscr{B}\vDash\neg\gamma$, then there is some $\gamma^\ast\in\Gamma$ such that $\Sigma\vDash\gamma^\ast$ and $\Pi\vDash\neg\gamma^\ast$.

That's all well and good, but I don't understand the use of this Lemma in the following two proofs:

Model Completeness

Firstly, it is used in proving an equivalent characterisation of model completeness; the part in question is the proof of the following (where $\mathscr{A}\preceq_\exists\mathscr{B}$ means that whenever $\mathscr{B}$ satisfies an existential formula with parameters from $\mathscr{A}$, so does $\mathscr{A}$):

If (whenever $\mathscr{A}\subseteq\mathscr{B}$ and $\mathscr{A},\mathscr{B}\vDash T$ then $\mathscr{A}\preceq_\exists\mathscr{B}$), then (every $\mathscr{L}$-formula is $T$-equivalent to an existential formula).

His proof goes as follows (it's an induction in which the only difficult case is that of universal formulae, so he shows that every existential formula is $T$-equivalent to a universal one):

Let $\phi(v_1,…,v_n)$ be an existential formula. Let $\textbf{c} =(c_1,…,c_n)$ be new constants and let $\Gamma=\text{Sent}_\forall(\mathscr{L}_\textbf{c})$. Take $\mathscr{L}_\textbf{c}$-structures $\mathscr{A}',\mathscr{B}'$ whose $\mathscr{L}$-reducts $\mathscr{A},\mathscr{B}$ are models of $T$, and such that $\text{Th}_\forall(\mathscr{A}')\subseteq\text{Th}_\forall(\mathscr{B}')$. Then (by the method of diagrams) there is some $\mathscr{A}''\equiv\mathscr{A}'$ with $\mathscr{B}'\subseteq\mathscr{A}''$. By the assumption, $\mathscr{B}'\preceq_\exists\mathscr{A}''$, and since existential formulae are preserved upwards from substructures, $\mathscr{A}'\vDash\phi(\textbf{c})$ iff $\mathscr{A}''\vDash\phi(\textbf{c})$ iff $\mathscr{B}'\vDash\phi(\textbf{c})$. So by the Separation Lemma applied to $T\cup\{\phi(\textbf{c})\}$ and $T\cup\{\neg\phi(\textbf{c})\}$, it follows that there is some $\psi(\textbf{c})\in\Gamma$ such that $T\cup\{\phi(\textbf{c})\}\vDash\psi(\textbf{c})$ and $T\cup\{\neg\phi(\textbf{c})\}\vDash\neg\psi(\textbf{c})$, and so $T\vDash\forall\textbf{v}(\phi(\textbf{v})\leftrightarrow\psi(\textbf{v}))$ and we are done.
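For completeness, here is how I read the final step, which I do follow (a sketch only; it uses the lemma on constants, since the $c_i$ do not occur in $T$):

```latex
% Sketch of the final step: from the two entailments produced by the
% Separation Lemma we recover a T-equivalence without the constants c.
\begin{align*}
T\cup\{\phi(\textbf{c})\}\vDash\psi(\textbf{c})
  &\;\Longrightarrow\; T\vDash\phi(\textbf{c})\rightarrow\psi(\textbf{c}),\\
T\cup\{\neg\phi(\textbf{c})\}\vDash\neg\psi(\textbf{c})
  &\;\Longrightarrow\; T\vDash\psi(\textbf{c})\rightarrow\phi(\textbf{c}),
\end{align*}
% so T entails phi(c) <-> psi(c); since the constants c do not occur in T,
% the lemma on constants yields
\[
T\vDash\forall\textbf{v}\,\bigl(\phi(\textbf{v})\leftrightarrow\psi(\textbf{v})\bigr).
\]
```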

I follow all the steps up to the use of the Separation Lemma, and I see how we are done once we get $\psi$ (as spelled out above), but I don't understand how the Separation Lemma is applicable here. What is the motivation behind the steps leading up to it? How have we shown that any two models of $T\cup\{\phi(\textbf{c})\}$ and $T\cup\{\neg\phi(\textbf{c})\}$ are separated by a universal sentence?

More plainly, if $\mathscr{M}\vDash T\cup\{\phi(\textbf{c})\}$ and $\mathscr{N}\vDash T\cup\{\neg\phi(\textbf{c})\}$, what is the universal sentence which is modelled by $\mathscr{M}$ and not by $\mathscr{N}$?

Quantifier Elimination

The second use of the Separation Lemma is very similar; it appears in the proof of the following implication:

If for all $\mathscr{A},\mathscr{B}\vDash T$ with a common substructure $\mathscr{C}$ and for every simple $\exists$-sentence $\phi$ of $\mathscr{L}_\mathscr{C}$ (that is, $\phi=\exists v\,\psi(v)$ with $\psi$ quantifier-free) we have $\mathscr{A}^C\vDash\phi$ iff $\mathscr{B}^C\vDash\phi$ (where $\mathscr{A}^C,\mathscr{B}^C$ are the $\mathscr{L}_\mathscr{C}$-expansions), then $T$ admits quantifier elimination.

Again, this is a proof by induction, where the quantifier case is the tricky one. Here is the proof:

Let $\phi(\textbf{v})=\exists v\psi(\textbf{v},v)$ where $\psi$ is quantifier free. Introduce new constants $\textbf{c}$ so that $\phi(\textbf{c})=\exists v\psi(\textbf{c},v)$. Let $\mathscr{L}_{\textbf{c}}=\mathscr{L}\cup\{\textbf{c}\}$, let $\Gamma=\text{Sent}_{q.f.}(\mathscr{L}_\textbf{c})$. We want to apply the separation lemma to $T\cup\{\phi(\textbf{c})\}$ and $T\cup\{\neg\phi(\textbf{c})\}$:

Take $\mathscr{L}_\textbf{c}$-structures $\mathscr{A}^\textbf{c},\mathscr{B}^\textbf{c}$ whose $\mathscr{L}$-reducts satisfy $\mathscr{A},\mathscr{B}\vDash T$ and such that $\text{Th}(\mathscr{A}^\textbf{c})\cap\Gamma=\text{Th}(\mathscr{B}^\textbf{c})\cap\Gamma$. Then for any two closed $\mathscr{L}_\textbf{c}$-terms $\sigma,\tau$ we have $\sigma^{\mathscr{A}^\textbf{c}}=\tau^{\mathscr{A}^\textbf{c}}$ iff $\sigma^{\mathscr{B}^\textbf{c}}=\tau^{\mathscr{B}^\textbf{c}}$ (and likewise for atomic sentences built from relation symbols), so we may identify the $\mathscr{L}_\textbf{c}$-substructures generated by the closed $\mathscr{L}_\textbf{c}$-terms in $\mathscr{A}^\textbf{c}$ and $\mathscr{B}^\textbf{c}$. It follows by the assumption that $\mathscr{A}^\textbf{c}\vDash\phi(\textbf{c})$ iff $\mathscr{B}^\textbf{c}\vDash\phi(\textbf{c})$.

Then by the Separation lemma, there is a $\psi(\textbf{c})\in\Gamma$ such that $T\cup\{\phi(\textbf{c})\}\vDash\psi(\textbf{c})$ and $T\cup\{\neg\phi(\textbf{c})\}\vDash\neg\psi(\textbf{c})$, and we are done.

Again, I fail to see how the Separation Lemma applies here. How have we shown that any two models of $T\cup\{\phi(\textbf{c})\}$ and $T\cup\{\neg\phi(\textbf{c})\}$ are separated by some quantifier-free sentence? That is, if $\mathscr{M}\vDash T\cup\{\phi(\textbf{c})\}$ and $\mathscr{N}\vDash T\cup\{\neg\phi(\textbf{c})\}$, what is the quantifier-free sentence which is modelled by $\mathscr{M}$ and not by $\mathscr{N}$?
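For reference, here is how I read the identification step in the middle of the proof (a sketch; the map $h$ and the notation for generated substructures are mine, not the lecturer's):

```latex
% Why Th(A^c) meet Gamma = Th(B^c) meet Gamma yields a common substructure:
% for closed L_c-terms sigma, tau the sentence sigma = tau is
% quantifier-free, hence lies in Gamma, so
\[
\sigma^{\mathscr{A}^{\textbf{c}}}=\tau^{\mathscr{A}^{\textbf{c}}}
\iff (\sigma=\tau)\in\text{Th}(\mathscr{A}^{\textbf{c}})\cap\Gamma
\iff (\sigma=\tau)\in\text{Th}(\mathscr{B}^{\textbf{c}})\cap\Gamma
\iff \sigma^{\mathscr{B}^{\textbf{c}}}=\tau^{\mathscr{B}^{\textbf{c}}},
\]
% and the same argument with atomic sentences R(sigma_1,...,sigma_k)
% (and their negations) shows that relations are matched as well. Hence
\[
h\bigl(\sigma^{\mathscr{A}^{\textbf{c}}}\bigr):=\sigma^{\mathscr{B}^{\textbf{c}}}
\]
% is a well-defined isomorphism between the substructures of A^c and B^c
% generated by the closed L_c-terms, giving the common substructure C
% needed to invoke the hypothesis about simple existential sentences.
```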

TLDR

In both of these proofs, it is shown that, for the given $\phi$, the truth of $\phi(\textbf{c})$ transfers between certain pairs of models of $T$ in the expanded language. However, I don't see how this fulfils the hypothesis of the Separation Lemma, as stated, for $T\cup\{\phi(\textbf{c})\}$ and $T\cup\{\neg\phi(\textbf{c})\}$. I'd appreciate it if someone could help me understand this!

Best Answer

I think you're just missing some logical reasoning here. I'll try to walk you through it.

In the model completeness example, your goal is to show that there is some $\psi(\mathbf{c})\in \Gamma$ such that $T\cup \{\varphi(\mathbf{c})\}\models \psi(\mathbf{c})$ and $T\cup \{\lnot \varphi(\mathbf{c})\}\models \lnot \psi(\mathbf{c})$.

By the Separation Lemma, it suffices to show that for every model $A\models T\cup \{\varphi(\mathbf{c})\}$ and every model $B\models T\cup \{\lnot \varphi(\mathbf{c})\}$, there is some $\psi(\mathbf{c})\in \Gamma$ such that $A\models \psi(\mathbf{c})$ and $B\models \lnot \psi(\mathbf{c})$.

Let me write this in a different, logically equivalent, way: Suppose $A$ and $B$ are $\mathscr{L}_{\mathbf{c}}$-structures which are both models of $T$. If $A\models \varphi(\mathbf{c})$ and $B\models \lnot \varphi(\mathbf{c})$, then there exists $\psi(\mathbf{c})\in \Gamma$ such that $A\models \psi(\mathbf{c})$ and $B\models \lnot \psi(\mathbf{c})$.

The contrapositive of this is: Suppose $A$ and $B$ are $\mathscr{L}_{\mathbf{c}}$-structures which are both models of $T$. If for all $\psi(\mathbf{c})\in \Gamma$, $A\models \psi(\mathbf{c})$ implies $B\models \psi(\mathbf{c})$ [i.e., if $\mathrm{Th}_\forall(A)\subseteq \mathrm{Th}_\forall(B)$], then $A\models \varphi(\mathbf{c})$ implies $B\models \varphi(\mathbf{c})$.

...and this is exactly what you proved by the method of diagrams! The reasoning in the quantifier elimination example is very similar.
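To display the rewriting as one chain of equivalent statements (the same content as above, nothing new; $A,B$ range over $\mathscr{L}_{\mathbf{c}}$-structures):

```latex
% The hypothesis of the Separation Lemma for Sigma = T + {phi(c)} and
% Pi = T + {not phi(c)}, rewritten step by step:
\begin{align*}
&\forall A\vDash T\cup\{\varphi(\mathbf{c})\},\ \forall B\vDash T\cup\{\lnot\varphi(\mathbf{c})\}\ \ \exists\psi(\mathbf{c})\in\Gamma:\ A\vDash\psi(\mathbf{c})\ \text{and}\ B\vDash\lnot\psi(\mathbf{c})\\
\iff{}&\forall A,B\vDash T:\ \bigl(A\vDash\varphi(\mathbf{c})\ \text{and}\ B\vDash\lnot\varphi(\mathbf{c})\bigr)\ \Rightarrow\ \exists\psi(\mathbf{c})\in\Gamma:\ A\vDash\psi(\mathbf{c})\ \text{and}\ B\vDash\lnot\psi(\mathbf{c})\\
\iff{}&\forall A,B\vDash T:\ \bigl(\forall\psi(\mathbf{c})\in\Gamma:\ A\vDash\psi(\mathbf{c})\Rightarrow B\vDash\psi(\mathbf{c})\bigr)\ \Rightarrow\ \bigl(A\vDash\varphi(\mathbf{c})\Rightarrow B\vDash\varphi(\mathbf{c})\bigr).
\end{align*}
% With Gamma = Sent_forall(L_c) the inner antecedent reads Th_forall(A) subset Th_forall(B),
% which is the model-completeness case; with Gamma = Sent_q.f.(L_c) it reads
% Th(A) meet Gamma subset Th(B) meet Gamma, which is the quantifier-elimination case.
```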