On the subject of categorical versus set-theoretic foundations there
is too much complicated discussion about structure that misses the
essential point about whether "collections" are necessary.
It doesn't matter exactly what your personal list of mathematical
requirements may be -- rings, the category of them, fibrations,
2-categories or whatever -- developing the appropriate foundational
system for it is just a matter of "programming", once you understand
the general setting.
The crucial issue is whether you are taken in by the Great Set-Theoretic
Swindle that mathematics depends on collections (completed infinities).
(I am sorry that it is necessary to use strong language here in order to
flag the fact that I reject a widely held but mistaken opinion.)
Set theory as a purported foundation for mathematics does not and cannot
turn collections into objects. It just axiomatises some of the intuitions
about how we would like to handle collections, based on the relationship
called "inhabits" (eg "Paul inhabits London", "3 inhabits N"). This
binary relation, written $\epsilon$, is formalised using first order
predicate calculus, usually with just one sort, the universe of sets.
The familiar axioms of (whichever) set theory are formulae in first order
predicate calculus together with $\epsilon$.
(There are better and more modern ways of capturing the intuitions about
collections, based on the whole of the 20th century's experience of algebra
and other subjects, for example using pretoposes and arithmetic universes,
but they would be a technical distraction from the main foundational issue.)
Lawvere's "Elementary Theory of the Category of Sets" axiomatises some
of the intuitions about the category of sets, using the same methodology.
Now there are two sorts (the members of one are called "objects" or "sets"
and of the other "morphisms" or "functions"). The axioms of a category
or of an elementary topos are formulae in first order predicate calculus
together with domain, codomain, identity and composition.
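For illustration, a fragment of such an axiomatisation might be written as follows (a sketch only, not the full list of axioms of ETCS or of a topos):

```latex
% Two sorts: objects A, B, ... and morphisms f, g, h, ...
% with operations dom, cod : Mor -> Obj and id : Obj -> Mor.
\mathrm{dom}(\mathrm{id}_A) = A \qquad \mathrm{cod}(\mathrm{id}_A) = A
% Composition g \circ f is defined whenever cod(f) = dom(g), and satisfies
f \circ \mathrm{id}_{\mathrm{dom}(f)} = f \qquad
\mathrm{id}_{\mathrm{cod}(f)} \circ f = f \qquad
(h \circ g) \circ f = h \circ (g \circ f)
```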
Set theorists claim that this use of category theory for foundations
depends on prior use of set theory, on the grounds that you need to start
with "the collection of objects" and "the collection of morphisms".
Curiously, they think that their own approach is immune to the same
criticism.
I would like to make it clear that I do NOT share this view of Lawvere's.
Prior to 1870 completed infinities were considered to be nonsense.
When you learned arithmetic at primary school, you learned some rules that
said that, when you had certain symbols on the page in front of you,
such as "5+7", you could add certain other symbols, in this case "=12".
If you followed the rules correctly, the teacher gave you a gold star,
but if you broke them you were told off.
Maybe you learned another set of rules about how you could add lines and
circles to a geometrical figure ("Euclidean geometry"). Or another one
involving "integration by parts". And so on. NEVER was there a "completed
infinity".
Whilst the mainstream of pure mathematics allowed itself to be seduced
by completed infinities in set theory, symbolic logic continued and
continues to formulate systems of rules that permit certain additions
to be made to arrays of characters written on a page. There are many
different systems -- the point of my opening paragraph is that you can
design your own system to meet your own mathematical requirements --
but a certain degree of uniformity has been achieved in the way that they
are presented.
We need an inexhaustible supply of VARIABLES for which we may substitute.
There are FUNCTION SYMBOLS that form terms from variables and other terms.
There are BASE TYPES such as 0 and N, and CONSTRUCTORS for forming new
types, such as $\times$, $+$, $/$, $\to$, ....
There are TRUTH VALUES ($\bot$ and $\top$), RELATION SYMBOLS ($=$)
and CONNECTIVES and QUANTIFIERS for forming new predicates.
Each variable has a type, formation of terms and predicates must respect
certain typing rules, and each formation, equality or assertion of a
predicate is made in the CONTEXT of certain type-assignments and
assumptions.
There are RULES for asserting equations, predicates, etc.
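For example, a typical judgement asserts a typing or an equation in a context, and the rules are presented as inference figures; here is a sketch (the precise notation varies from one system to another):

```latex
% A judgement in context: under the type-assignments and assumptions
% to the left of \vdash, the statement on the right may be asserted.
x : N,\; y : N,\; x = y \;\vdash\; x + 1 = y + 1
% A typical rule, here the introduction rule for product types:
\frac{\Gamma \vdash a : X \qquad \Gamma \vdash b : Y}
     {\Gamma \vdash \langle a, b \rangle : X \times Y}
```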
We can, for example, formulate ZERMELO TYPE THEORY in this style. It has
type-constructors called powerset and {x:X|p(x)} and a relation-symbol
called $\epsilon$. Obviously I am not going to write out all of the details
here, but it is not difficult to make this agree with what ordinary
mathematicians call "set theory" and is adequate for most of their
requirements.
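To give the flavour, two of the rules might be rendered like this (again a sketch, not the full system):

```latex
% Formation of subset types by comprehension:
\frac{\Gamma,\; x : X \vdash p(x) \;\mathrm{prop}}
     {\Gamma \vdash \{x : X \mid p(x)\} \;\mathrm{type}}
% The relation-symbol \epsilon, linking a term to a subset type:
\frac{\Gamma \vdash a : X \qquad \Gamma \vdash p(a)}
     {\Gamma \vdash a \,\epsilon\, \{x : X \mid p(x)\}}
```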
Alternatively, one can formulate the theory of an elementary topos in this
style, or any other categorical structure that you require. Then a "ring"
is a type together with some morphisms for which certain equations are
provable.
If you want to talk about "the category of sets" or "the category of rings"
WITHIN your type theory then this can be done by adding types known as
"universes", terms that give names to objects in the internal category
of sets, and a dependent type that provides a way of externalising
the internal sets.
So, although the methodology is the one that is practised by type theorists,
it can equally well be used for category theory and the traditional purposes
of pure mathematics. (In fact, it is better to formalise a type theory
such as my "Zermelo type theory" and then use a uniform construction to
turn it into a category such as a topos. This is easier because the
associativity of composition is awkward to handle in a recursive setting.
However, this is a technical footnote.)
A lot of these ideas are covered in my book "Practical Foundations of
Mathematics" (CUP 1999), http://www.PaulTaylor.EU/Practical-Foundations
Since writing the book I have written things in a more type-theoretic
than categorical style, but they are equivalent. My programme called
"Abstract Stone Duality", http://www.PaulTaylor.EU/ASD is an example of the
methodology above, but far more radical than the context of this question
in its rejection of set theory, ie I see toposes as being just as bad.
Here's an example of something that I think Mumford might advocate in the foundations of mathematics: Solovay's model.
The axiom of choice is generally accepted by mathematicians, but it has always suffered from the nagging problem that it violates certain intuitions we have. Almost all of these counterintuitive consequences of the axiom of choice are related in one way or another to the existence of non-measurable sets. (See this related MO question for more information, in particular Ron Maimon's answer.) Solovay's model shows that we can come close to having our cake and eating it too: we can simultaneously have the axiom "all sets of reals are Lebesgue measurable" and the axiom of dependent choice. The former pretty much eliminates all the probabilistic paradoxes, while the latter gives us almost all of the "desirable" consequences of the axiom of choice.
The reason that I think this is the sort of thing Mumford might advocate is that Mumford's discussion of Freiling's theorem shows that he really wants to preserve probabilistic intuition even at the expense of jettisoning a well-accepted axiom. In the paper he suggests getting rid of the power-set axiom, but my guess is that he was probably not familiar with Solovay's model at the time; had he been, he would have been favorably disposed towards it.
EDIT: In particular, in Solovay's model, all of the following hold: (1) the axioms of ZF, including powerset; (2) all sets are Lebesgue measurable (which is most of what we need to capture probabilistic intuitions); (3) Freiling's axiom of symmetry; (4) the continuum hypothesis in the form "every uncountable subset of $\mathbb R$ can be put into 1-1 correspondence with $\mathbb R$." The only price one pays is that the axiom of choice has to be weakened to dependent choice. (Thanks to Ali Enayat for pointing this out.) My view is that Freiling's argument shows only that probabilistic intuition is incompatible with full-blown AC (which is something we knew already); the continuum hypothesis is a red herring.
For more information about the practical impact of adopting Solovay's model, and some speculation on why it hasn't already been adopted widely, see this MO question and Andreas Blass's answer to this MO question.
Best Answer
It is quite difficult to answer this question comprehensively. It's a bit like asking "so what's been going on in analysis lately?" It is probably best if logicians who work in various areas each answer what is going on in their area. I will speak about logic in computer science. I am very curious to see what logicians from other areas have to say about their branches of logic.
A hundred years ago logic was driven by philosophical and foundational questions about mathematics. The development of first-order logic and ZFC was one important branch of logic, but others were model theory and computability theory. In fact, computability theory led directly to the invention of modern computers. Ever since, logic has had a very fruitful relationship with computer science.
One could argue that applications of logic in computer science are not about foundations of mathematics, but such an opinion can hardly be defended. Of course, many applications of logic in computer science are just that, applications, but not all of them. Some are directly related to foundations of mathematics (see below), while others are about understanding what computation is about (Turing got us on the right track, but there's a lot more that can be said about computation apart from "it's all equivalent to Turing machines"). I hope you can agree that computation is a foundational concept in mathematics, equal to such concepts as (mathematical) structure and deductive method.
We live in an era of logic engineering: we are learning how to apply methods of logic in situations that were not envisioned 100 years ago. Consequently, we need to devise new kinds of logic that are well-suited for new problems at hand. To give just one example: real-world computer systems often work concurrently with each other in an environment which acts in a stochastic fashion. Computer scientists have learned that Turing machines, first-order logic and set theory are not a very good way of describing or analyzing such systems (the computers are of course no more powerful than Turing machines and can be simulated by them, but that's hardly a helpful way to look at things). Instead they devise new kinds of logic which are well suited for their problems. Often we design logics that have good computational properties, so that machines can use them.
Another aspect of logic in computer science, which is closer to a mathematician's heart, is computer-supported formalization of mathematics. Humans are not able to produce complete formal proofs of mathematical theorems. Instead we adhere to what is best described as "informal rigour". Computers, however, are very good at checking all the details of a mathematical proof. Modern tools, such as various proof assistants (Coq, HOL, Agda, Lean), are improving at a fast rate, and many believe they will soon become useful tools for the working mathematician. Be that as it may, the efforts to actually do mathematics formally are driving research in how this can be feasibly done. While some proof assistants do use first-order logic and set theory (e.g. Mizar), most use type theory. This is probably not just a fashion. It is a response to the fact that first-order logic and set theory suffice for formalization of mathematics in principle, but not so much in practice. Logicians have been telling us that in principle all mathematics can be formalized. And now that we're actually doing it, we're learning new things about logic. This to my mind is one of the most interesting contemporary developments in foundations of mathematics.
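To give a concrete sense of what machine-checked mathematics looks like, here is a toy example in Lean 4 syntax (an illustration only, not drawn from any particular formalization project):

```lean
-- A complete, machine-checked proof: the proof term is verified by
-- Lean's kernel against the rules of its type theory, with no step
-- taken on trust. Here we appeal to the library lemma Nat.add_comm.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```

Of course, real formalizations consist of thousands of such theorems, and the interesting research questions are about how to organize them feasibly.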