Decidability Results for $\mathcal{L}_{\omega_1, \omega}$

Tags: decidability, logic, reference-request

Recently I was giving a talk on decidability results in logic and I wanted to emphasize that there is more to life than just first-order logic. In particular, I made sure to include results such as the decidability of $\mathsf{MSO}[<]$ and of extensions by quantifiers like $\exists^\infty$, $\exists^{n\text{ mod } m}$, etc., because I find those results extremely interesting (and you never know: maybe that extra expressive power will let us solve a problem someday).

In particular, I mentioned infinitary logic as an example of a more expressive language because I think larger conjunctions and disjunctions are pretty easy to understand and act as a good complement to the bonus quantifiers.

Unfortunately I couldn't come up with (or find, on short notice) any decidability results for $\mathcal{L}_{\omega_1, \omega}$. We know that some $\mathsf{MSO}$ theories are decidable, so it doesn't seem too outlandish to think that some $\mathcal{L}_{\omega_1, \omega}$ theories might be too.

Does anybody have any results (ideally with references) in this vein?


Thanks in advance ^_^

Best Answer

There are various things we can say.

Right off the bat, though, there's a serious type problem. When we talk about decidability without any further elaboration, we're referring to sets of natural numbers, or things which can "naturally" be coded as such (e.g. finite strings of rationals, formulas in the language of set theory, etc.). However, there are continuum-many $\mathcal{L}_{\omega_1,\omega}$-sentences, so this framework immediately breaks down.

There are two obvious responses to this:

  1. Restrict attention to countable fragments of $\mathcal{L}_{\omega_1,\omega}$.

  2. Use some sort of generalized computability theory to handle the situation.

And in fact we can merge the two approaches:

  3. Study countable fragments of $\mathcal{L}_{\omega_1,\omega}$ using generalized recursion theory.

Each of these three approaches yields interesting results, and in fact it's a bad idea to separate them too much (as indicated by the third). There's too much to put into a single answer, but I'll mention two specific topics which have driven lots of research and are surprisingly accessible. Note that I've backed away from decidability per se; the issues here are so foundational that it's really worth stepping all the way back and seeing how "complexity in $\mathcal{L}_{\omega_1,\omega}$" is developed in the first place.

Overall, I recommend the paper Barwise: infinitary logic and admissible sets as a starting point. While, as the title suggests, it focuses on the impact of one researcher in particular, it covers a lot of ground - and Barwise really was the most important figure in computability-theoretic infinitary logic, in my opinion. It's also extremely readable.

(The paper Barwise: abstract model theory and generalized quantifiers is also excellent and probably of interest, but less directly related. Ash and Knight's book meanwhile is extremely lovely and relevant, but is really limited to the particular case of $\mathcal{L}_{\omega_1,\omega}\cap L_{\omega_1^{CK}}$, which is not the whole story.)


Scott sentences

Probably the most striking thing about $\mathcal{L}_{\omega_1,\omega}$ is Scott's isomorphism theorem, which says that every countable structure (in a countable language) can be "pinned down" by a single $\mathcal{L}_{\omega_1,\omega}$-sentence. Precisely:

Suppose $\mathfrak{A}$ is a countable structure in a countable language $\Sigma$. Then there is an $\mathcal{L}_{\omega_1,\omega}$-sentence $\varphi$ such that for every countable $\Sigma$-structure $\mathfrak{B}$ we have $$\mathfrak{B}\models\varphi\quad\iff\quad\mathfrak{B}\cong\mathfrak{A}.$$ (Note that $\varphi$ could still have models not isomorphic to $\mathfrak{A}$; they'd just have to be uncountable.)
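For concreteness, here is a standard example (my own illustration, not from the original answer): the countable structure $(\mathbb{N},<)$ is pinned down by the sentence saying that $<$ is a linear order, that every element has only finitely many predecessors, and that for every $n$ some element has exactly $n$ predecessors:

$$\mathrm{LO}(<)\ \wedge\ \forall x\bigvee_{n<\omega}\exists^{=n}y\,(y<x)\ \wedge\ \bigwedge_{n<\omega}\exists x\,\exists^{=n}y\,(y<x),$$

where $\mathrm{LO}(<)$ is the (finitary) conjunction of the linear order axioms and $\exists^{=n}$ abbreviates the usual first-order "there exist exactly $n$." Every countable model of this sentence is isomorphic to $(\mathbb{N},<)$.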

Since countable structures are perfect for computability theory, we immediately run into a bunch of natural questions about the difficulty of finding a "Scott sentence" for a given structure. The first big theorem here is that this can force us beyond the computable:

There is a computable structure with no computable Scott sentence.

(Note that "computable $\mathcal{L}_{\omega_1,\omega}$-sentence" does make sense - $\mathcal{L}_{\omega_1,\omega}$-sentences are naturally coded by individual reals, or more clearly labelled well-founded trees, and we can talk about the computability of those codes.) The simplest example of such a thing is the Harrison order - see here.

It turns out that the question of measuring the "Scott complexity" of a computable (or general) structure is surprisingly nuanced - see e.g. here or here. I don't want to go into that here, though.


Fragments and proof systems

Building on work of Lopez-Escobar, who looked at the whole of $\mathcal{L}_{\omega_1,\omega}$, Barwise introduced the notion of a countable fragment of infinitary logic and showed that countable fragments have extremely nice properties. Specifically, suppose that $\mathbb{A}$ is a countable admissible set, that is, a countable transitive set satisfying $\mathsf{KP}$. Let $\mathcal{L}_{\mathbb{A}}=\mathcal{L}_{\infty,\omega}\cap\mathbb{A}$; that's not a typo, since $\mathbb{A}$ might not realize that each of its elements is countable. Then $\mathcal{L}_{\mathbb{A}}$ satisfies analogues of various properties of first-order logic, like the compactness and completeness theorems - now the Barwise compactness and Barwise completeness theorems. This lets us develop, for example, an analogue of Gödel's incompleteness theorem for fragments of infinitary logic (see also this old MO question of mine).
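For concreteness, here is one standard formulation of the Barwise compactness theorem (stated from memory; see Barwise's book for the precise version):

Let $\mathbb{A}$ be a countable admissible set and let $T\subseteq\mathcal{L}_{\mathbb{A}}$ be a theory which is $\Sigma_1$-definable over $\mathbb{A}$. If every subset $T_0\subseteq T$ with $T_0\in\mathbb{A}$ has a model, then $T$ has a model.

Taking $\mathbb{A}=L_\omega$ (the hereditarily finite sets), "$T_0\in\mathbb{A}$" just means "$T_0$ is finite" and "$\Sigma_1$ over $\mathbb{A}$" means "c.e.," so this specializes to a special case of the ordinary compactness theorem.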

I'm glossing over and simplifying a lot here; Barwise's wonderful book is the go-to for all this. But the point is that with a reasonable proof theory in place, "Gödel-style" complexity analyses can be carried out smoothly.

Note, however, that the relevant notion of computability is not classical computability; rather, we look at admissible recursion theory (or computability on admissible sets). The idea is that "finite" is replaced with "$\in\mathbb{A}$" and "c.e." is replaced with "$\Sigma_1$ over $\mathbb{A}$" - think about the special case $\mathbb{A}=L_\omega$ to see why this makes sense. This was initially hinted at by Kreisel and Kripke, in the context of levels of $L$ specifically; the key breakthrough was that "one level up" the right analogues are $\Pi^1_1$ = c.e. and $\Delta^1_1$ = finite (these correspond to "$\Sigma_1$-over-$L_{\omega_1^{CK}}$ subset of $\omega$" and "subset of $\omega$ which is an element of $L_{\omega_1^{CK}}$," respectively, where $\omega_1^{CK}$ is the first admissible ordinal $>\omega$).
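To unpack the hint about $\mathbb{A}=L_\omega$ (my own gloss, not part of the answer): a set $W_e\subseteq\omega$ is c.e. exactly when membership in it is witnessed by a finite object, namely a halting computation, so

$$n\in W_e\quad\iff\quad L_\omega\models\exists s\,\theta_e(n,s),$$

where $\theta_e(n,s)$ is a $\Delta_0$ formula saying "$s$ codes a halting computation of the $e$-th machine on input $n$." Thus "c.e." is the same as "$\Sigma_1$ over $L_\omega$," and "finite" is the same as "element of $L_\omega$"; replacing $L_\omega$ by an arbitrary admissible set $\mathbb{A}$ gives the generalized notions.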

The standard source on higher recursion theory is Sacks' book, although it really focuses on levels of $L$ (until it gets to $E$-recursion, which is a very different type of thing anyway). Computability on arbitrary admissible sets can actually be quite weird; an admissible set need not have a simply-definable well-ordering of itself, so we can't "index Turing machines" anymore in general.