I am not sure I understand all remarks that Colin made, and I disagree with some of them, but I can comment on the idea of "multiverse". Let us consider the following positions:
- There is a standard mathematical universe.
- There are many mathematical universes.
- There is a multiverse of mathematics.
These are supposed to be informal statements. You may interpret "universe" as "model of set theory ZF(C)" or "topos" if you like, but I don't want to fix a particular interpretation.
I would call the first position "naive absolutism". While many mathematicians subscribe to it on some level, logicians have known for a long time that it is not a very useful position. For example, someone who seriously believes that there is just one mathematical universe would reject model theory as fiction, or at least brand it as a technical device without real mathematical content. Such opinions can actually slow down mathematical progress, as is historically witnessed by the way "Euclidean absolutism" obstructed the development of geometry.
The second position is what you get if you take model theory seriously. Set theorists have produced many different kinds of models of set theory. Why should we pretend that one of them is the best one? You might be tempted to say that "the best mathematical universe is the one I am in", but this leads to an unbearably subjective position and to strange questions such as "how do you know which one you are in?" At any rate, it is boring to stay in one universe all the time, so I don't understand why some people want to stick to having just one cozy universe.
I need to explain the difference between the second and the third position. By "multiverse" I mean an ambient of universes which form a structure, rather than just a bare collection of separate universes. The difference between studying a "collection of universes" and studying a "multiverse" is roughly the same as the difference between Euclid's geometry and the Erlangen program -- both study points and lines, but the conceptual understanding is at different levels. Likewise, a meta-mathematician might prove interesting theorems about models of set theory, or he might consider the overall structure of set-theoretic models.
It should be obvious at this point that there cannot be just one notion of "multiverse". I can think of at least two:
- Set theory studies the multiverse of models of ZF. The structure is studied via notions of forcing, and probably other things I am not aware of.
- Topos theory studies the multiverse of toposes. The structure of the multiverse is expressed as a 2-category of toposes (and geometric morphisms).
I do not mean to belittle set theory, but in a certain sense topos theory is more advanced than set theory because it uses algebraic methods to study the multiverse (I consider category theory to be an extension of algebra). In this sense the formulation of forcing in terms of complete Boolean algebras by Scott and Solovay was a step in the right direction because it brought set theory closer to algebra. Set theorists should learn from topos theorists that transformations between set-theoretic models are far more interesting than the models themselves.
In the present context the question "classical or intuitionistic logic" becomes "what kind of multiverse". If the multiverse is "an ambient of universes, each of which supports the development of mathematics", then taking our multiverse to be either too small or too big will cause trouble:
- if the multiverse is too small, we will be puzzled by its ad hoc properties and we will look in vain for overall structure (imagine doing analysis with only rational numbers),
- if the multiverse is too big, its overall structure will be poor and it will include universes whose internal mathematics is too far removed from our own mathematical experience (imagine doing analysis on arbitrary rings--I am sure it's possible but it's unlike classical analysis).
Topos theory gains little by restricting to Boolean toposes. I have never heard a topos-theorist say "I wish all toposes were Boolean". Also, toposes occurring "in nature" (sheaves on a site) typically are not Boolean, which speaks in favor of intuitionistic mathematics.
An example of an ad hoc property in too small a multiverse occurs in set theory. We construct models of set theory by forming Boolean-valued sets which are then quotiented by an ultrafilter. What is the ultrafilter quotient for? The algebraic properties of Boolean-valued sets are hardly improved when we pass to the quotient, not to mention that the quotient stands no chance of having an explicit description. A possible explanation is this: we are looking only at one part of the set-theoretic multiverse, namely the part encompassed by Tarskian model theory. Our limited view makes us think that the ultraquotient is a necessity, but the construction of Boolean-valued models exposes the ultraquotient as a combination of two standard operations (product followed by a quotient). We draw the natural conclusion: a model of a classical first-order theory should be a structure that measures validity of sentences in a general complete Boolean algebra. The Boolean algebra $\lbrace 0,1 \rbrace$ must give up its primacy. What shall we gain? Presumably a more reasonable overall structure. At first sight I can tell that it will be easy to form products of models, and that these products will have the standard universal property (contrast this with ultraproducts which lack a reasonable universal property because they are a combination of a categorical limit and colimit). Of course, there must be much more.
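To spell out the two steps in symbols (informally, for the full Boolean-valued universe $V^B$ over a complete Boolean algebra $B$): every sentence $\varphi$ receives a truth value $[\![\varphi]\!] \in B$, and choosing an ultrafilter $U \subseteq B$ and quotienting yields the two-valued model $V^B/U$ with $$V^B/U \models \varphi \quad\Longleftrightarrow\quad [\![\varphi]\!] \in U.$$ The ultrafilter quotient is precisely the step that forces truth values back into $\lbrace 0,1 \rbrace$.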
I am going to draw heavily from the GitHub discussion on HoTT book pull request 617.
There are different kinds of equality. Let us say that equality is "intensional" if it distinguishes objects based on how they are defined, and "extensional" if it distinguishes objects based on their "extension" or "observable behavior". In Frege's terminology, intensional equality compares sense and extensional equality compares reference.
To use Frege's famous example, intensionally the morning star and the evening star are clearly not the same (because their definitions are different), but they are extensionally the same because they both denote the same object (planet Venus). A more mathematical example is comparison of $x + y$ and $y + x$. These are extensionally equal, but intensionally differ because (the usual) definition of $+$ treats its arguments asymmetrically. It should be clear that two functions may be extensionally equal (have the same behavior) even though they differ intensionally (have different definitions).
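For instance, here is a minimal Coq sketch (the names `double1` and `double2` are just illustrative): two definitions of doubling that are extensionally equal but intensionally distinct.

```coq
(* Two definitions of "doubling": extensionally equal (same value on
   every input) but intensionally different (different definitions). *)
Definition double1 (n : nat) : nat := n + n.

Fixpoint double2 (n : nat) : nat :=
  match n with
  | O => O
  | S m => S (S (double2 m))
  end.

(* Their extensional agreement is provable by induction ... *)
Lemma double_agree : forall n, double1 n = double2 n.
Proof.
  unfold double1; induction n as [| n IH]; simpl.
  - reflexivity.
  - rewrite <- IH, <- plus_n_Sm; reflexivity.
Qed.

(* ... but [double1] and [double2] are not definitionally equal, and
   proving the propositional equality [double1 = double2] of the two
   functions themselves requires function extensionality. *)
```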
It is possible for two kinds of equality to coexist. Thus in type theory there are two equalities. The intensional one is called "judgmental" or "definitional equality" $\equiv$ and the extensional one is known as "propositional equality" $=$. Mathematicians are aware of $=$ as a "real thing" while they think of $\equiv$ as "formal manipulation of symbolic expressions" or some such.
We may control the two kinds of equality and the relationship between them with additional axioms. For instance, the reflection rule collapses $\equiv$ and $=$ by allowing us to conclude $a \equiv b$ from $a = b$ (the other direction is automatic). There are also varying degrees of extensionality of $=$. Without any extra axioms, $=$ is already somewhat extensional. For instance, we can prove commutativity of $+$ on natural numbers by induction in the form $x + y = y + x$, but we cannot prove $x + y \equiv y + x$. Function extensionality is an axiom which describes what constitutes an "observation" on functions: by saying that two functions are equal when they give equal values we are in essence saying that only values matter (but not for example the "running time" or some other aspect of evaluation). Another axiom which makes $=$ "more extensional" is the Univalence axiom.
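To make the commutativity example concrete, here is a minimal Coq sketch (the lemma names are illustrative): the propositional equality is provable by induction, while the judgmental equality only sees how $+$ actually computes.

```coq
(* The propositional equality x + y = y + x is provable by induction. *)
Lemma plus_comm_prop (x y : nat) : x + y = y + x.
Proof.
  induction x as [| x IH]; simpl.
  - now rewrite <- plus_n_O.
  - now rewrite IH, plus_n_Sm.
Qed.

(* But the two sides are not judgmentally equal: [reflexivity], which
   checks only definitional equality, fails on [x + y = y + x].
   By contrast, [0 + y] and [y] are judgmentally equal, because [+]
   recurses on its first argument: *)
Lemma plus_O_l (y : nat) : 0 + y = y.
Proof. reflexivity. Qed.
```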
It is hard to do mathematics without function extensionality, but type theorists have their reasons for not including it as an axiom by default. Before I explain why, let me mention that there is a standard workaround. We may introduce user-defined equalities on types by equipping types with equivalence relations. This is what Bishop did in his constructive mathematics, and this is what we do in Coq when we use setoids. With such user-defined equalities we of course recover function extensionality by construction. However, setoids are often annoying to work with, and they drag in technical problems which we would prefer to avoid. Incidentally, the setoid model shows that function extensionality does not increase the power of type theory (it is a model validating function extensionality built in type theory without function extensionality).
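A minimal sketch of the setoid workaround in Coq (the name `ext_eq` is illustrative, not from any particular library): we replace the primitive equality on $A \to B$ by a user-defined equivalence relation for which extensionality holds by construction.

```coq
(* The setoid workaround: equip the function type with a user-defined
   equality instead of relying on the primitive one. *)
Definition ext_eq {A B : Type} (f g : A -> B) : Prop :=
  forall x : A, f x = g x.

(* It is an equivalence relation ... *)
Lemma ext_eq_refl {A B} (f : A -> B) : ext_eq f f.
Proof. intro x; reflexivity. Qed.

Lemma ext_eq_sym {A B} (f g : A -> B) : ext_eq f g -> ext_eq g f.
Proof. intros H x; symmetry; apply H. Qed.

Lemma ext_eq_trans {A B} (f g h : A -> B) :
  ext_eq f g -> ext_eq g h -> ext_eq f h.
Proof. intros H1 H2 x; rewrite H1; apply H2. Qed.

(* ... and "function extensionality" for it holds by definition.  The
   price is that every construction on functions must now be shown to
   respect ext_eq, which is the bookkeeping that makes setoids
   annoying in practice. *)
```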
So why don't type theorists adopt function extensionality? If we want to have a type theory with nice properties, and a useful proof assistant, then $\equiv$ should be "easy" to deal with. Technically speaking, we would like a strongly normalizing $\equiv$. By assuming function extensionality we throw into type theory a new constant `funext` without explaining how it interacts with the process of strong normalization, and things break. Type theorists would say that we failed to explain the computational meaning of `funext`.
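For concreteness, here is roughly what such a postulate looks like when added directly in Coq (a minimal sketch; the HoTT library packages its `Funext` axiom differently, as a type class):

```coq
(* A naive way to postulate function extensionality: add a new
   constant with no computation rule attached to it. *)
Axiom funext :
  forall (A : Type) (B : A -> Type) (f g : forall x : A, B x),
    (forall x : A, f x = g x) -> f = g.

(* Any proof term whose head is [funext] is stuck: Coq has no rule for
   reducing it further, which is exactly the missing "computational
   meaning" referred to above. *)
```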
Consequently, Coq does not adopt function extensionality, because that would lead to a lot of problems: Coq would no longer be able to handle $\equiv$ automagically, and the whole system would simply behave worse. Type theorists of course recognize that having a good computational explanation of function extensionality, and more generally of the univalence axiom, would be extremely desirable. This is why the HoTTest open problem is to give a computational interpretation of the univalence axiom. Once this is accomplished, we ought to have at our disposal type systems and proof assistants which are much more natural from a mathematician's point of view. Until then, you can always assume `funext` as an axiom and work around the resulting complications. To see how this can be done, have a look at the `Funext` axiom in the HoTT library.
[This P.S. is outdated after the question was edited.] P.S. The title of your question points to a common leap of reasoning from "not accepting function extensionality" to "denying function extensionality". While there are models in which function extensionality has counter-examples, one should be aware of the difference between "not accept" and "deny". (I am complaining because this sort of leap is often made about the law of excluded middle, and there it leads to absurdity.)
IIRC, the calculus of inductive constructions is equi-interpretable with ZFC plus countably many inaccessibles -- see Benjamin Werner's "Sets in Types, Types in Sets". (This is because of the presence of a universe hierarchy in the CIC.)
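As a quick illustration of that hierarchy (a minimal sketch in Coq; the displayed universe names vary between versions):

```coq
(* Every universe is a term of a strictly larger universe.  Coq hides
   the level indices by default; this makes them visible. *)
Set Printing Universes.

Check Type.
(* prints something like:  Type@{u} : Type@{u+1}  for a fresh level u *)

(* "Type : Type" is accepted only because the two occurrences are
   elaborated at different levels; a single self-membered universe
   would be inconsistent (Girard's paradox). *)
Check (Type : Type).
```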
As I understand it, the homotopy type theory project does not need (or want?) the full consistency strength of Coq; they make use of it simply because it is one of the better implementations of type theory. Instead, what makes this work interesting is not its consistency strength, but its focus on a wholly new dimension of logical complexity: the complexity of equality (which, utterly amazingly, they relate to homotopy type). It puts me in mind of a famous quotation of Rota:
(This is not meant as criticism of your question, but just as a caution not to let old battles cause us to lose sight of the new innovations brought to us.)
That said, as a sometime constructivist, I do not find consistency to be a primary philosophical notion. If PA is consistent, so is PA+$\lnot$Con(PA). That is, systems can lie despite being consistent, and so I hesitate to build my foundations on consistency. Instead, I prefer a proof-theoretic justification for logical systems, such as Gentzen's proof of cut-elimination. This guarantees that a system is not merely consistent, but that theorems actually have proper proofs. For a good introduction to these ideas, you can hardly do better than Per Martin-Löf's Siena lectures, "On the Meanings of the Logical Constants and the Justifications of the Logical Laws".
I should be clear that these are more stringent conditions than consistency, and hence offer no way at all of avoiding the obstacles that the halting/incompleteness theorems pose. Heyting and Peano arithmetic are equi-consistent: there is no reason to trust constructivism more, if consistency strength is all that you are interested in. It is other logical qualities that make constructivism attractive.