[Math] Which model of computation is “the best”

algorithms, computational complexity

In 1937 Turing described the Turing machine. Since then, many models of computation have been described in an attempt to find a model that is as close to a real computer as possible while still remaining simple enough to design and analyse algorithms in.

As a result, we have dozens of algorithms for, e.g., the SORT problem in different models of computation. Unfortunately, we cannot even be sure that an implementation of an algorithm with running time $O(n)$ in a word RAM with bit-vector operations allowed will run faster than an implementation of an algorithm with running time $O(n \cdot \log{n})$ in a plain word RAM (I am talking about "good" implementations only, of course).

So, I want to understand which of the existing models is "the best" for designing algorithms, and I am looking for an up-to-date, detailed survey of models of computation that gives the pros and cons of each model and its closeness to reality.

Best Answer

While it is the case that many models of computation agree on which functions $\mathbb{N} \to \mathbb{N}$ are computable, I would like to point out that this is not the case when we think of higher-order functions. (I am making this remark not to answer the question but to supplement the existing answers.)

For example, in Gödel's T (simply typed $\lambda$-calculus with booleans, natural numbers and primitive recursion) there is no universal quantifier $all : ((\mathbb{N} \to 2) \to 2) \to 2$, i.e., a map such that $$all(p) = \begin{cases}1 & \text{if $\forall f . p(f) = 1$} \\ 0 & \text{otherwise.}\end{cases}$$ But we can write such a quantifier in PCF (simply typed $\lambda$-calculus with booleans, natural numbers and general recursion). Once we have a candidate program $all$, we still have to worry whether it works. The answer again depends on the model of computation.
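To make the PCF half of this concrete, here is a minimal Haskell sketch (Haskell standing in for PCF, since it has the general recursion that Gödel's T lacks) of the exhaustive search over Cantor space in the style of Berger and Escardó's "seemingly impossible" functional programs. The names `Cantor`, `cons`, `find`, `exists`, and `allQ` are my own choices for this illustration.

```haskell
-- Cantor space: infinite binary sequences, encoded as functions.
type Cantor = Integer -> Bool

-- Prepend one bit to a sequence.
cons :: Bool -> Cantor -> Cantor
cons b _ 0 = b
cons _ f n = f (n - 1)

-- 'find p' lazily produces a sequence satisfying p whenever one exists
-- (and some total sequence otherwise).
find :: (Cantor -> Bool) -> Cantor
find p =
  if p (cons True (find (p . cons True)))
    then cons True  (find (p . cons True))
    else cons False (find (p . cons False))

-- Does some total sequence satisfy p?
exists :: (Cantor -> Bool) -> Bool
exists p = p (find p)

-- The universal quantifier 'all' from the text ('all' clashes with the
-- Prelude, hence the name 'allQ').
allQ :: (Cantor -> Bool) -> Bool
allQ p = not (exists (not . p))
```

For instance, `allQ (\f -> f 0 || not (f 0))` evaluates to `True` and `allQ (\f -> f 0)` to `False`; the striking point is that `allQ` terminates on every total predicate `p`, thanks to the compactness of Cantor space, even though the space of sequences being quantified over is uncountable.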

If we use as the underlying model Kleene's number realizability, i.e., Turing machines which accept and output finite strings of bits, then $all$ does not work, because of the Kleene tree (a computable infinite binary tree with no computable infinite path, whose existence destroys the compactness of Cantor space that the quantifier relies on). If instead we use Kleene's function realizability, i.e., Turing machines which take as input and produce as output infinite strings of bits, including non-computable ones, then $all$ does work.

As a second example, let me mention (exact) real number computation. There are two ways to model the reals:

  1. Intensionally, as a datatype $R_I$ of Cauchy sequences, in which each real is represented by a (fast) Cauchy sequence of rationals converging to it. In particular, a program may inspect the representation of a real (a small sketch of this follows the list).

  2. Extensionally as an abstract datatype $R_E$ of real numbers where we cannot inspect the representation of the reals. An example of such a language is RealPCF.
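To illustrate what it means to inspect the representation, here is a minimal Haskell sketch of the intensional datatype $R_I$, under the common convention that the $n$-th approximation lies within $2^{-n}$ of the real being represented. The names (`RI`, `addRI`, `sameFirstApprox`, ...) are purely illustrative and are not taken from RealPCF or any existing library.

```haskell
import Data.Ratio ((%))

-- A real as a "fast" Cauchy sequence: for n >= 0, 'x n' is a rational
-- within 2^(-n) of the real that x represents.
type RI = Integer -> Rational

-- Two different representations of the same real number, 1/3.
thirdExact :: RI
thirdExact _ = 1 % 3

thirdDecimal :: RI                     -- truncated decimal expansions of 1/3
thirdDecimal n = (10 ^ n `div` 3) % (10 ^ n)

-- Addition: to get within 2^(-n) of x + y, ask each summand for a
-- 2^(-(n+1)) approximation.
addRI :: RI -> RI -> RI
addRI x y n = x (n + 1) + y (n + 1)

-- Intensionality in action: a program may look at the approximations
-- themselves, and can therefore distinguish two representations of the
-- same real.
sameFirstApprox :: RI -> RI -> Bool
sameFirstApprox x y = x 0 == y 0
```

Here `sameFirstApprox thirdExact thirdDecimal` is `False` even though both arguments represent $1/3$; in the extensional datatype $R_E$ no such observation is available, since a program only ever sees the real itself, never a particular Cauchy sequence for it.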

It has been known that $R_I$ and $R_E$ represent the same reals, that $R_I^{R_I}$ and $R_E^{R_E}$ represent the same maps, and that $R_I^{R_I^{R_I}}$ and $R_E^{R_E^{R_E}}$ represent the same rank 2 functionals. But recently Matthias Schröder proved that at the next level we have a disagreement between $R_I^{R_I^{R_I^{R_I}}}$ and $R_E^{R_E^{R_E^{R_E}}}$!

Higher-type computation can be quite intriguing, and in that setting it is definitely not the case that "all models of computation are equivalent".