[Math] Was lattice theory central to mid-20th century mathematics

ho.history-overview, lattice-theory, lattices

Four years ago, I read a book on the history of mathematics up to 1970 or so. It was very interesting up until the end. The last few chapters, though, were on lattices. The author claimed that lattices were taking math by storm, and that soon lattices would be one of the central objects in mathematics, and many other such claims similar to those now made in reference to category theory.

I couldn't find the book online, but I found references to similar ideas in this post: Good lattice theory books?

Question 1: Was lattice theory central to mid-twentieth century mathematics?
Question 2: Is it central now?

Edit: A simpler question with more definite answers: did any mathematicians in the mid-1900s claim that lattice theory was vital or central to mathematics, and what arguments did they give?

Best Answer

For the "simpler question": yes, in the decade 1930-1940 the (not many) pioneers of lattice theory had big hopes; one can read these hopes in the Bulletin of the AMS of 1938, for the first symposium on lattice theory, and in the introduction to the first edition of Birkhoff's lattice theory book. Then compare these hopes with the subsequent admissions that such big hopes did not materialize: see the introduction to the second edition of Birkhoff's book (and the even more downgraded hopes in the third edition), and the reports at later symposia on lattice theory.

In the remaining part of this answer, comments on parts of other answers are used to show possible reasons for even lower hopes for the future. (However, it might also be that fragmentation and decentralization will also scale down the hopes for category theory, or for any other presently emphasized subject.)

It has been remarked that Rota liked to be provocative; I also will try to be such. Hopefully not too much.

> it was not unfashionable at one time to work in lattice theory (if we use the "definition" that important mathematics is the mathematics done by important mathematicians).

In fact even Mac Lane worked around 1930-35 on matroid lattices and semimodularity (exchange axioms); by the time he invented categories, his interest in lattices had waned, and he considered lattice theory too specialized a subject, with too specialized questions. [More on the opposition categories / lattices below.]

Interestingly, one name for lattices in the golden decade was "structures", since one hope was that the structure of groups (and other algebraic constructs) would be fully revealed by studying the lattice of congruences. For finite abelian groups we now know that this works reasonably well, but not perfectly; for finite nonabelian groups one simply loses too much information (all simple groups have the same, trivial, lattice of congruences ...)
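To make "reasonably well, but not perfectly" concrete: for a cyclic group the congruence lattice is just the divisor lattice of its order, so the non-isomorphic abelian groups $\mathbb{Z}/4$ and $\mathbb{Z}/9$ have isomorphic congruence lattices (both three-element chains). A small sketch in Python (my illustration; the helper `subgroup_lattice` is not from any library):

```python
def subgroup_lattice(n):
    """Subgroups of the cyclic group Z/n under inclusion.
    For a cyclic group these are exactly the sets of multiples
    of each divisor of n, so the lattice is the divisor lattice of n."""
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    subs = [frozenset(range(0, n, d)) for d in divisors]
    # the inclusion order, as a set of index pairs (including reflexive pairs)
    order = {(i, j) for i, s in enumerate(subs)
             for j, t in enumerate(subs) if s <= t}
    return subs, order

subs4, order4 = subgroup_lattice(4)
subs9, order9 = subgroup_lattice(9)

# Z/4 and Z/9 are non-isomorphic groups, but both congruence lattices
# are 3-element chains: every pair of subgroups is comparable.
print(len(subs4), len(subs9))            # 3 3
print(len(order4) == len(order9) == 6)   # a 3-chain has 6 comparable pairs
```

The analogous computation for a nonabelian simple group would return only the two trivial congruences, which is the information loss mentioned above.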

But even in that decade there already was the question of G. D. Birkhoff to his son Garrett: "what can you do with lattices that you cannot do without?" G. Birkhoff found the question unhelpful, since he found the lattice language so natural that he could not even think of avoiding it. My half-answer, even now, sidesteps the question: with lattices one mainly does not do new things better; one gains a better way to understand and "compact" old things. [How can you place new objects in a room with all space filled up? Free some space by compacting. Perhaps this is the only thing that the two mathematical meanings of the word "lattice" have in common.]

This use of lattices is also related to the fact that, even if preorders and lattices appear everywhere, the theory of such structures is not so important: that theory is rarely used to prove things outside itself, and one usually prefers other tools and other languages. An extreme analogy: try to explain why mathematics is important to an experienced person who lives without mathematics (beyond the level of money arithmetic); quite possibly, no change in their way of life will result.

> the notion of a frame generalizes the notion of the lattice of open sets of a topological space. The results and proofs in point-free topology are mostly lattice theoretic.

The first to publish about geometries without points was probably von Neumann; his paper in the PNAS cites pointless ideas by Alexander, but it seems that topologists waited about 30 years or more to publish something about pointless topology.

This gives me an occasion to say that the frames of pointless topology are not true "topological spaces without points". At most they can be considered sober spaces without points, but surely they are not a generalization of topological spaces. To generalize topological spaces, one can take pairs: a complete Boolean algebra (which need not be atomic, but when it is, it is identified with the set of its atoms, a "pointful substrate") equipped with a subframe (the lattice of open subsets).

However, there is also a difficulty in considering frames as pointless sober spaces: what is the embedding of the frame into a complete Boolean algebra (the pointless set)? Many embeddings are possible, but none of them is universal (adjoint of the forgetful functor, a free construction) for the concept of morphism that categorists have predetermined to be the correct one. So it does not matter that there are explicit, concrete, syntactically nice constructions that produce a complete Boolean algebra enlarging a given frame (for example, take the freely generated Boolean algebra and then the completion by cuts): none of them will be considered sufficiently good.

On the contrary, a structuralist like me (more on the opposition between categories and structures later on) will consider each of the possible explicit embeddings sufficiently good, each one in its context. In other words, a structuralist (like the founders of Bourbaki) fixes objects (and so isomorphisms among them) and then changes the concept of more general morphism depending upon the problem to be treated; the categorical point of view, on the other hand, has to emphasize morphisms over objects (composition of morphisms alone does define a category; objects alone do not). Treating cases where the concept of morphism changes does not work as elegantly and economically as a categorical study of cases where one fixed concept of morphism is sufficient.

Unsurprisingly, the way Bourbaki treats its version of the adjoint functor theorem is well suited to cases where morphisms can be changed. Unsurprisingly, Mac Lane did not like it. Unsurprisingly, I find Bourbaki right and Mac Lane wrong (even if A. Weil himself wrote that categories have a richer mathematical content than Bourbaki's structures).

Perhaps surprisingly, Bourbaki's founders did not like lattices (compare also what Rota writes about Emil Artin's vision of algebra, in evident resonance with A. Weil). Neither, at the opposite side, did Grothendieck like distributive lattices, but at least his school immediately re-invented the needed part of their theory in his own language, in the same way as von Neumann re-invented Lebesgue integration each time he needed something from that theory: for such people, re-inventing is quicker than quoting.

Despite the fact that Bourbaki's founders declared order structures to be "mother structures" at the same level as algebraic and topological structures, they then disregarded them except for a few exercises and the general treatment of some topics (well orders, Zorn's lemma) in the first book, about logic and sets (a book that I consider very nice in its original form [choice in the metatheory, ordered pairs as a primitive concept, no need of replacement when the objective is only to define structures, ...] and that logicians and set theorists, on the contrary, do not like at all; see Mathias's strong comments). So the real numbers are not defined by Bourbaki until after a full development of general topology (with uniform structures and topological groups); the well-known definition using only algebra and a relatively complete total order was not considered suitable.

So one has that lattice theory is liked neither by Bourbaki nor by categorists (the latter use preorders and their special cases almost only as "easy" examples: categories with locally almost the smallest possible hom sets, in the same way as they see monoids as categories with only one object).

Conclusion: with such post-W.W.II rejection by two such eminent classes of mathematicians, there is no hope of any centrality for lattice theory.

However, the dislike of lattices by Bourbaki is in some way unfortunate, since lattices would provide nice examples of the contrast between the non-categorical view of Bourbaki's founders about the concept of mathematical structure, and the view of modern categorists (categories give the real structural view, they say). I say instead that the structural and categorical views are in lucky opposition, being complementary; even if each of the two points of view could be technically subsumed into the other, it is better not to do so and to use the different points of view as they are: different. Even categorists admit that some sides of set theory are best viewed without category theory [examples: recursion for well-founded sets; the replacement axiom]; the next step would be to extend this to structures, in the cases where isomorphisms are fixed but more general morphisms change with the problem to be studied. I hope that one day they will acknowledge this. On the other hand, the very definition of structure in Bourbaki is disliked by categorists; I find it strange that they do not note that the "scale of sets" used by Bourbaki to define structures is precisely a representation of a free topos in the topos of a set theory.

So, why is it unfortunate that Bourbaki disliked lattices? Because lattices provide a very good example where, for the same objects (and the same isomorphisms), there are many different concepts of morphism, each useful in a different context. In the same way as for topological spaces one has many kinds of morphisms (continuous maps; continuous open maps; local homeomorphisms; closed continuous maps; proper maps; quotient maps; ...), one has many kinds of morphisms among lattices (or even preorders), some more algebraic, some relational, some almost topological: isotone maps (or isotone and antitone maps; here and in the sequel, morphisms can be extended to include the dual); residuated maps (the archetypal cases of the simplest adjoint situations); [semi]lattice homomorphisms; complete [semi]lattice homomorphisms; maps that preserve finite meets and increasing joins; ...
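To illustrate the "residuated maps" entry in this list (a sketch of mine, not from the original text): for any map $f:A\to B$, the direct image and the preimage between the powerset lattices form the archetypal adjoint (residuated) pair, characterized by $f(S)\subseteq T \iff S\subseteq f^{-1}(T)$:

```python
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

A, B = {0, 1, 2}, {'a', 'b'}
f = {0: 'a', 1: 'a', 2: 'b'}  # an arbitrary map A -> B

def image(S):      # the residuated (left adjoint) map P(A) -> P(B)
    return frozenset(f[x] for x in S)

def preimage(T):   # its residual (right adjoint) map P(B) -> P(A)
    return frozenset(x for x in A if f[x] in T)

# the residuation (adjunction) law: image(S) <= T  iff  S <= preimage(T)
assert all((image(S) <= T) == (S <= preimage(T))
           for S in powerset(A) for T in powerset(B))
# as a consequence, both maps are isotone and image preserves all joins (unions)
print("residuated pair verified")
```

The same pair is also the simplest case of the adjoint situations mentioned above: left adjoints preserve joins, right adjoints preserve meets.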

Besides, Birkhoff was also the father of the concept of cryptoisomorphism: essentially, an extension of the concept of definitional equivalence (of first order structures) to higher order structures.

[Categorists do not like this so much: Adamek, Herrlich, Strecker insist that "concrete isomorphisms" of categories are definable, but "concrete equivalences" are not. On the contrary, many cases of cryptoisomorphism by means of syntactically defined constructions (Hodges, in model theory, would say: interpretations, or at least word constructions, perhaps using parameters in some fixed rigid structure, like the natural or real numbers, or the first inaccessible uncountable cardinal / its Conway surreal numbers ...) do produce concrete equivalences, factorizable through two retractions of structures towards normalized ones (for example: affine spaces with Hilbert axioms can be normalized to cases where lines and planes are sets of points and incidence is given set theoretically; affine spaces defined by a simply transitive action of a vector space on a set of points can be normalized to cases where the group of translations is a group of bijections of the set of points, and the skew field of scalars is a subring of endomorphisms of the abelian group of translations) and a concrete isomorphism of categories.]

Cryptoisomorphisms (not only the well-known ones about matroids) illustrate the complementarity of the structural and categorical points of view. When definitions of apparently very different structures (my favorite example: complemented modular lattices; von Neumann regular rings) are shown to be essentially the same thing (cryptoisomorphism), one automatically has (by the syntactical form of the constructions) an equivalence of "categories of objects" (categories of structures where all morphisms are isomorphisms); do they give an equivalence for more general kinds of morphisms? The syntactical form of the constructions often gives an automatic answer. In any case, if one has an equivalence for the more general kinds of morphisms that one would naively use on the two sides, fine; if not, even better, since this means that one has very different points of view on the same objects. Hence cryptoisomorphisms (and the lattice theoretic examples) are a nice complement to Bourbaki's concept of structure, since Bourbaki explicitly insisted that the concept of isomorphism is uniquely fixed by the concept of structure, but the concept of morphism is not.

[Warning: from this point of view, the category having as morphisms the homotopy classes of continuous maps between topological spaces is not a category of morphisms between topological spaces (the isomorphisms are more general than the homeomorphisms): it is a category of objects (which are not structures in Bourbaki's definition) strictly weaker than topological spaces, in the same way as metrizable spaces are weaker than metric spaces, completely regular (i.e. uniformizable) spaces are strictly weaker than uniform spaces, and differentiable paracompact manifolds are strictly weaker than Riemannian manifolds, ...]

Gelfand lists the works on lattice theory that are "nontrivial" in his view: Dedekind, Weyl (axiomatization of projective geometry), von Neumann, Gelfand-Ponomarev. I cannot help but quote (the translation is mine): "There are two ways of work: lattices and categories. Lattices are more convenient, but discredited by specialists in general algebra".

It seems a bit ironic that Stone duality is not listed by Gelfand :)

Gelfand's quote is wonderful and illuminating (compare also the end of Rota's paper in Birkhoff's memory).

All of Gelfand's examples belong to the theory of modular lattices (really, much more than modular; see below). Stone's duality (and its generalization, Gelfand duality, in the commutative case), on the contrary, belongs to the distributive theory.

Two essential aspects of Stone's work are (1) the definitional equivalence between "complemented distributive lattices" and "associative rings with 1 where each element is idempotent" (cryptoisomorphic definitions of Boolean algebras); (2) the duality of the preceding structures with the totally disconnected compact T$_2$ spaces (by means of the spectrum construction, in various forms that are equivalent in this particular case but inequivalent in more general cases: prime ideals; maximal ideals; dispersion-free states; pure states; ...; and finally, the spectrum of the commutative $C^*$-algebra obtained as the metric completion of the ring of step-valued functions on the Boolean algebra).
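Part (1) can be checked on a tiny example (my illustration, not taken from Stone's papers): on the powerset of a three-element set, symmetric difference as ring addition and intersection as ring multiplication give a ring with 1 in which every element is idempotent, and the lattice operations are recovered as ring polynomials ($a\vee b = a+b+ab$, complement $=1+a$):

```python
from itertools import combinations

U = frozenset({1, 2, 3})   # the powerset of U is a Boolean algebra
elements = [frozenset(c) for r in range(len(U) + 1)
            for c in combinations(sorted(U), r)]

def add(a, b): return a ^ b   # ring addition = symmetric difference
def mul(a, b): return a & b   # ring product  = intersection (lattice meet)

for a in elements:
    assert mul(a, a) == a                # every element is idempotent
    assert add(a, a) == frozenset()      # hence the ring has characteristic 2
    assert (U - a) == add(U, a)          # complement: 1 + a  (here 1 = U)
    for b in elements:
        assert (a | b) == add(add(a, b), mul(a, b))   # join: a + b + ab
print("Boolean algebra <-> Boolean ring verified on", len(elements), "elements")
```

Each direction of the translation is an explicit term construction, which is exactly what makes the two definitions cryptoisomorphic rather than merely equivalent in some loose sense.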

In the commutative / distributive / classical case, part (1) is quite trivial. What happens in the irreducibly noncommutative / irreducibly modular nondistributive / irreducibly quantum case? This part becomes von Neumann's coordinatization theorem, a much, much deeper result. [Incidentally, in his book von Neumann clearly distinguishes the three levels: existence of a coordinatization ($n>3$), unicity up to isomorphism ($n>2$), unicity of the ring isomorphism that induces a lattice isomorphism ($n>1$). He did not have the concept of category, but he had a crystal clear concept of categorical equivalence at least five years before categories were defined. Unsurprisingly, categorists cite Stone, but not von Neumann, as a forerunner of categorical concepts, and they do not even like von Neumann's set theory, which used functions as the primitive concept.] What happens to part (2)? It becomes the embedding theorem of Frink (complemented case) - Moushi\~no (sectionally semicomplemented case) - J\'onsson (extension of Stone's universal property for the embedding). However, this part plays a minor role in the noncommutative theory (note: there is no orthomodular analogue of such an embedding, at least no "easy" one, even if not-so-useful attempts exist); (1) is the only known way to have something in the noncommutative case that resembles Gelfand's equivalence (not duality this time!) between commutative unital $C^*$-algebras and frames (the open sets of the spectrum) [example: von Neumann's equivalence between finite factors and "continuous geometries with transition probability"].

So, in a sense, to cite Stone duality in place of von Neumann's work would be a restriction to an easy commutative case, instead of a pointer towards the deeper possibilities of noncommutative measure theory (with von Neumann algebras and their projection ortholattices), noncommutative topology (using noncommutative $C^*$-algebras), noncommutative geometry, ...

Gelfand's list is not only effective for operator algebras and quantum physics; it is also useful for the more elementary level of linear algebra and geometry.

All categorists know that the following two languages are equivalent:

[1] the language of linear algebra (sums and products): modules over rings

[2] the language of abelian categories.

The equivalence of languages is given by the Freyd - Mitchell embedding theorem (representation of a small abelian category by means of [all module homomorphisms in] a class of modules stable under finite direct sums and under kernels, images, cokernels, coimages of homomorphisms). A finite (and quite small) list of axioms (even universal Horn first order) in the language of bicartesian categories is sufficient to imply all the properties of that syntactical kind valid in every "wanted model"; even more, they characterize the "wanted models" up to equivalence.

But there is a third level, equivalent to the first two. It is well forgotten (especially by categorists, see below) and old fashioned: the level of synthetic geometry. It is a level not liked by Bourbaki (see Dieudonné's book on linear algebra and elementary geometry). You guessed right: it is the level of (some special cases of modular) lattice theory.

G. Hutchinson, independently rediscovering the methods used by von Neumann in the first (and easier) part of the coordinatization theorem, proved that what can be done with abelian categories (in their first order language) is exactly what can be done with the following kinds of lattices (abstracting the finite dimensional subspaces of an infinite dimensional vector space): modular lattices with 0, where each interval $[a,b]$ is projective (by a finite chain of perspectivities $[x\wedge y,y]$ to $[x, x\vee y]$) to an initial interval $[0,c]$, and where each element $x$ can be doubled: there are $z,y$ such that $x,y,z$ generate a 0-sublattice which is a projective line with three points

    .
  / | \ 
x   y   z
  \ | /
    0                                      

i.e. $x,y,z$ are pairwise independent with the same pairwise join (each is an axis of perspectivity for the other two, so $x,y,z$ are "equidimensional" with join $x\oplus y=y\oplus z=z\oplus x$, which doubles the dimension).
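A concrete instance of this configuration (an illustration of mine, not from Hutchinson's paper): in the subspace lattice of the vector space $GF(2)^2$, the three one-dimensional subspaces spanned by $e_1$, $e_2$ and $e_1+e_2$ pairwise meet in $0$ and have the same pairwise join, the whole plane:

```python
# subspaces of GF(2)^2, each represented as the set of its vectors
zero = frozenset({(0, 0)})
x = frozenset({(0, 0), (1, 0)})   # span(e1)
y = frozenset({(0, 0), (0, 1)})   # span(e2)
z = frozenset({(0, 0), (1, 1)})   # span(e1 + e2)
top = frozenset((a, b) for a in (0, 1) for b in (0, 1))  # the whole plane

def join(u, v):
    """Join of two subspaces = set of all sums u + v (mod 2);
    for two 1-dimensional subspaces this is already their span."""
    return frozenset(((a + c) % 2, (b + d) % 2)
                     for (a, b) in u for (c, d) in v)

# x, y, z are pairwise independent (meet = 0) with the same pairwise join:
assert x & y == y & z == z & x == zero
assert join(x, y) == join(y, z) == join(z, x) == top
print("projective line with three points over GF(2) verified")
```

Here the join of any two of the three lines is two-dimensional, so each line is indeed "doubled" inside it, as the axiom requires.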

Three easy and natural axioms in the classical language of synthetic geometry. When two categorists, Carboni and Grandis, did essentially the same thing (recovering an abelian category from the "projective spaces" associated to its objects), they naturally chose to use the categorical language: a full half page in a very specialized paper just to state their axioms, which only specialists understand. Yes, one can do everything in the categorical language; but this does not mean that the categorical language is the best one for every given task.

To quote Gelfand once again: "There are two ways of work: lattices and categories. Lattices are more convenient, but discredited by specialists in general algebra."

> while lattices occur everywhere in mathematics, lattice theory does not seem to be a very popular area of mathematics, since for some reason mathematicians are not too interested in lattices as algebraic structures.

> the problem with lattice theory is that, although lattices appear everywhere in mathematics, they usually appear as objects, not as a category.

> Usually, we only have a functor that has only lattices in its range.

> matroids are "the same thing" as finite atomistic semimodular lattices. But this is not an equivalence of categories -- there are not enough arrows on the lattice side.

At this point you can easily guess my diagnosis: the problem really arises when one expects categories but instead gets structures (with fixed isomorphisms, but without predetermined morphisms). And lattices are a typical case of structures where it is better not to fix a unique type of morphism, if one wants to use these structures at their best.

If and when the "structural" point of view is no longer obscured by a forced categorical point of view, perhaps someone could again have some hope in lattice theory. But surely not the hyperbolic hopes of the decade 1930-1940. Above all, what is better: to try to do everything in one language (and have Zariski remind Grothendieck that in the old times one was expected to learn more than one language), or to try to learn more languages and use each one where it seems more suitable?