[Math] Why are finiteness conditions important (and how to recognize them)

ag.algebraic-geometry, big-picture, soft-question

I think everybody here has met lots of finiteness conditions, like those requiring a vector space to be finite-dimensional, an abelian group to be finitely generated, a ring to be Noetherian, a manifold to be compact, a sheaf to be coherent, or a complex to be bounded. And there are lots of good theorems once you assume some finiteness condition (e.g. Serre duality and the Hodge decomposition for compact Kähler manifolds). Removing those finiteness conditions seems to be non-trivial and interesting (this applies, for example, to the two theorems just mentioned).

So my question is,

why are finiteness conditions so important?

This question baffled me for a long time. I remember that before I learnt compactness, when doing proofs in calculus, I felt I needed some finite covering of the closed unit interval, but I somehow thought I should avoid using that in my proof. Even after I had learnt compactness for a while, the only thing I felt it gave me was some "combinatorial advantage": I didn't understand the necessity of assuming compactness in many theorems of elementary analysis, although I was sure I used it in the proofs and could cook up some tricky counterexample whenever compactness wasn't assumed. [I don't feel I have a better understanding of that even now; I can only say I got used to it, i.e. assume compactness and good things happen.]

The remarks on compactness also applied when I first learnt the condition of a ring being Noetherian. Somehow the condition looked unnatural to me at the beginning, although after getting used to it I felt that examples of non-Noetherian rings are crazy.

And one more thing: I think one thing about Hartshorne/EGA that confuses (early-stage) readers is that they spend lots of time proving finiteness statements, such as that the pushforward of a coherent sheaf under a proper morphism is coherent, or that the cohomology of a coherent sheaf on a proper scheme over $A$ is a coherent $A$-module. One can only appreciate these results with enough sophistication. (If you are about to prove these theorems in your algebraic geometry class, how do you motivate them and explain why people care about them?)

===============

A related question, which maybe I should ask in a separate thread, is: how do we recognize good finiteness conditions? Some are "easy", like compactness and finite generation. But some are tricky, like the condition that a triangulated category be compactly generated. By recognizing good finiteness conditions one might hope to prove some good theorems, but how do we know whether the conditions are too restrictive? (I guess this requires hard work, but is there any convincing sign of a good condition before one dives into the details?) Does anybody here know the history of compactness (for topological spaces) and coherence (for sheaves of modules)? [Judging by the names, coherent sheaves may have come before quasi-coherent ones.]

Please re-tag it.

Best Answer

The fact that various finiteness conditions lead to good theorems which are manifestly false in their absence seems like a good explanation of why they are important. (In fact, I am having trouble thinking of a wholly different kind of explanation for why anything in pure mathematics is important.)

I think you are on to something to the extent that we need to give nonexamples and counterexamples along with our theorems in order to give students even a fighting chance at appreciating them. In the realm of commutative algebra this was something that was notoriously underappreciated until relatively recently: I recall well Rota writing about the "hygienic theorems" [Rota, Indiscrete Thoughts, pp. 215-216] in algebra, e.g. things like "Every regular domain is normal". As he wrote, we have no chance of grasping results like this unless we see examples -- preferably several -- of domains which are not regular, not normal, and normal but not regular. In this particular example this is easily done, but unfortunately many of the core counterexamples in the subject have a reputation of being too difficult to show beginners. At this point I feel the need to quote directly from p. 136 of Reid's Undergraduate Commutative Algebra:

The catch-phrase "counterexamples due to Akizuki, Nagata, Zariski, etc. are too difficult to treat here" when discussing questions such as Krull dimension and chain conditions for prime ideals, and finiteness of normalisation is a time-honoured tradition in commutative algebra textbooks (comparable to the use of fascist letters $\mathfrak{P}$ and $\mathfrak{m}$ etc., for prime and maximal ideals). This does little to stimulate enthusiasm for the subject, and only discourages the reader in an already obscure literature; I discuss here three counterexamples (taken, with some simplifications, from the famous "unreadable" appendix to [Nagata]) to show some of the ideas involved.

This is very well said (well, except that I honestly don't know what's wrong with $\mathfrak{m}$...): most of the standard texts in commutative algebra leave unanswered the natural questions an alert reader will have: Is this hypothesis necessary? Is the converse of this result true? What happens if we don't assume that $M$ is a finitely generated module over a Noetherian domain? And so forth.
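To return to Rota's example for a moment, here are standard specimens of that hierarchy (my own choices; many others would serve just as well): the polynomial ring $k[x,y]$ over a field $k$ is regular, hence normal; the coordinate ring of the quadric cone, $k[x,y,z]/(z^2 - xy) \cong k[u^2, uv, v^2]$, is normal but not regular, since its local ring at the origin has Krull dimension $2$ but embedding dimension $3$; and the coordinate ring of the cuspidal cubic, $k[x,y]/(y^2 - x^3) \cong k[t^2, t^3]$, is not even normal, because $t = y/x$ lies in the fraction field and satisfies the monic equation $T^2 - x = 0$ without lying in the ring.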

By a coincidence I have just finished -- that is, within the last half hour -- teaching a first graduate course on commutative algebra. I tried to spend a lot of time on examples, and I was not afraid to make "technical" digressions about what happens when $M$ is not finitely generated. In particular, I spent an extra long amount of time on module-theoretic questions, which made me feel closer to the heart of the subject. It is easy to motivate the need for modules to be finitely generated: there is a structure theorem for finitely generated modules over a PID, but there is no structure theorem for infinitely generated abelian groups. The example of $\mathbb{Q}_p$ as a $\mathbb{Z}_p$-module shows that even over a DVR infinitely generated modules can have a complicated structure. Then, when I got to Noetherian rings, I motivated them in part by showing that the Noetherian condition is equivalent to many seemingly innocuous and desirable properties, like every submodule of a finitely generated module being finitely generated. At the same time I discussed plenty of examples of non-Noetherian rings, including rings which are very nice "except that they are non-Noetherian", like the ring of all algebraic integers. So I think I gave my students at least an opportunity to feel their way around finiteness conditions in the subject.
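To make two of these examples concrete, here is one standard illustration of each (my own choices, not necessarily the ones used in the course just described). In the ring $\overline{\mathbb{Z}}$ of all algebraic integers there is a strictly ascending chain of principal ideals

$$(2^{1/2}) \subsetneq (2^{1/4}) \subsetneq (2^{1/8}) \subsetneq \cdots \subsetneq (2^{1/2^n}) \subsetneq \cdots$$

Each inclusion holds because $2^{1/2^n} = \big(2^{1/2^{n+1}}\big)^2$, and each is strict because $2^{1/2^{n+1}} \in (2^{1/2^n})$ would force $2^{-1/2^{n+1}}$ to be an algebraic integer, which it is not: its minimal polynomial over $\mathbb{Q}$ is $x^{2^{n+1}} - \tfrac{1}{2}$, which does not lie in $\mathbb{Z}[x]$. Similarly, $\mathbb{Q}_p = \bigcup_{n \geq 0} p^{-n}\mathbb{Z}_p$ is not finitely generated as a $\mathbb{Z}_p$-module, since any finitely generated $\mathbb{Z}_p$-submodule of $\mathbb{Q}_p$ is contained in $p^{-N}\mathbb{Z}_p$ for some $N$.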

Let me add that there are some recent texts which do a much better job at this. Most of all I can enthusiastically recommend T.Y. Lam's Lectures on Modules and Rings. As with all of his books, his skill at balancing theory and examples is superior and makes for very pleasant, stimulating reading.

It goes much the same for compactness in elementary analysis, but it seems easier to me to supply the necessary counterexamples: every time you encounter a theorem which holds on a compact interval $[a,b]$, ask yourself whether it holds on noncompact intervals (and, if applicable, compact non-intervals!). In all the instances I can think of now, such counterexamples are well known and relatively easy to supply.
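For instance (standard examples, just carrying out the exercise suggested above): on $(0,1]$ the function $f(x) = 1/x$ is continuous but unbounded and not uniformly continuous, so both the extreme value theorem and the Heine–Cantor theorem fail on noncompact intervals; on $(0,1)$ the function $f(x) = x$ is continuous and bounded yet attains neither its supremum nor its infimum; and the sequence $x_n = n$ in $[0,\infty)$ has no convergent subsequence, so the Bolzano–Weierstrass property fails as well.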