I have not studied category theory in extreme depth, so perhaps this question is a little naive, but I have always wondered if analysis could be taught naturally using categories. I ask this because it seems like quite a lot of topological and group-theoretic concepts can be defined most succinctly using categorical concepts, and the categorical definitions are more revealing. So my question is: (1) Is it possible/beneficial to teach analysis using category theory? and (2) Are there any good textbooks that use this method?
Analysis from a Categorical Perspective
ca.classical-analysis-and-odes, ct.category-theory, mathematics-education, reference-request
Related Solutions
Names in category theory are often born when someone realizes that a concept in one particular topic can be generalized in a categorical way. The generally-defined concept is then named after the original narrowly-defined one.
The case of metric spaces provides a slightly notorious example. As discussed in that other question, metric spaces can be viewed as an example of enriched categories. So, given any concept in metric space theory, you can try to generalize it to the context of enriched categories. This happened with the property of completeness of metric spaces, which one might call Cauchy-completeness since it's about Cauchy sequences. This concept turns out to generalize very smoothly to enriched categories, and to be a useful and important property there.
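To make the enrichment concrete, here is the standard dictionary (due to Lawvere) between metric spaces and enriched categories; the notation below is mine, not from the answer itself.

```latex
% A (Lawvere) metric space is a category enriched in the monoidal poset
% V = ([0,\infty], \geq, +, 0):
%   objects:      the points x, y, z, \dots
%   hom-objects:  \mathrm{Hom}(x,y) := d(x,y) \in [0,\infty]
% Composition and identities then become familiar metric axioms:
d(x,y) + d(y,z) \;\geq\; d(x,z)   % composition  =  triangle inequality
0 \;\geq\; d(x,x)                 % identity     =  d(x,x) = 0
```

Note that ordinary metric spaces additionally demand symmetry and that $d(x,y)=0$ only when $x=y$; the enriched-category viewpoint drops both requirements.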
Many people call the property "Cauchy-completeness" in the general context of enriched categories too. But a significant minority disagree with this choice, feeling that it's stretching the terminology too far. For example, when applied to ordinary (Set-enriched) categories, the property merely says that every idempotent morphism in the category splits. This doesn't "feel" like the completeness condition on metric spaces. So there are other names in currency too, such as "Karoubi complete" (especially popular in the French school).
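For concreteness, here is what splitting an idempotent means in an ordinary (Set-enriched) category; this is the standard definition, stated in my own notation.

```latex
% An idempotent e : X \to X (so e \circ e = e) is said to *split*
% if there exist an object Y and morphisms
%   r : X \to Y  (retraction),    s : Y \to X  (section)
% satisfying
s \circ r = e, \qquad r \circ s = \mathrm{id}_Y.
% A category is Cauchy complete (equivalently, Karoubi complete)
% in this sense when every idempotent admits such a splitting.
```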
It's true that many pieces of categorical terminology do come from analysis, but maybe all that says is that analysis is an old and venerable subject. Exact is another example. It's used to mean several slightly different things in category theory, confusingly, but the most common usage is that a functor is "left exact" if it preserves finite limits. Now that comes from homological algebra, where one talks about exact sequences; a functor between abelian categories preserves left exact sequences iff it preserves finite limits. And that in turn, I believe, comes from the terminology of differential equations.
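The homological-algebra fact alluded to above can be stated precisely as follows (standard result, in my notation).

```latex
% For an additive functor F between abelian categories, the following
% are equivalent:
%  (1) F preserves finite limits;
%  (2) F preserves kernels;
%  (3) F sends every left exact sequence
0 \longrightarrow A \longrightarrow B \longrightarrow C
% to a left exact sequence
0 \longrightarrow FA \longrightarrow FB \longrightarrow FC.
```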
The first, and perhaps most important, point is that hardly any categories that occur in nature are skeletal. The axiom of choice implies that every category is equivalent to a skeletal one, but such a skeleton is usually artificial and non-canonical. Thus, even if using skeletal categories simplified category theory, it would not mean that the subtleties were artificial, but rather that the naturally occurring subtleties could be removed by an artificial construction (the skeleton).
In fact, however, skeletons don't actually simplify much of anything in category theory. It is true, for instance, that any functor between skeletal categories which is part of an equivalence of categories is actually an isomorphism of categories. However, this isn't really useful because, as mentioned above, most interesting categories are not skeletal. So in practice, one would either still have to deal with equivalences of categories, or be constantly replacing categories by equivalent skeletal ones, which is even more tedious (and you'd still need the notion of "equivalence" in order to know what it means to replace a category by an "equivalent" skeletal one).
In all the other examples you mention, skeletal categories don't even simplify things that much. In general, not every pseudofunctor between 2-categories is equivalent to a strict functor, and skeletality won't help you here. Even if the hom-categories of your 2-categories are skeletal, there can still be pseudofunctors that aren't equivalent to strict ones, because the data of a pseudofunctor includes coherence isomorphisms that may not be identities. Similarly for cloven and split fibrations. A similar question was raised in the query box here: important data can be encoded in coherence isomorphisms even when they are automorphisms.
The argument in CWM mentioned by Leonid is another good example of the uselessness of skeletons. Here's one final one that's bitten me in the past. You mention that universal objects are unique only up to (unique specified) isomorphism. So one might think that in a skeletal category, universal objects would be unique on the nose. This is actually false, because a universal object is not just an object, but an object together with data exhibiting its universal property, and a single object can have a given universal property in more than one way.
For instance, a product of objects A and B is an object P together with projections P→A and P→B satisfying a universal property. If Q is another object with projections Q→A and Q→B and the same property, then from the universal properties we obtain a unique specified isomorphism P≅Q. Now if the category is skeletal, then we must have P=Q, but that doesn't mean the isomorphism P≅Q is the identity. In fact, if P is a product of A and B with the projections P→A and P→B, then composing these two projections with any automorphism of P produces another product of A and B, which happens to have the same vertex object P but has different projections. So assuming that your category is skeletal doesn't actually make anything any more unique.
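The twisting construction described above can be written out in one line (my notation, summarizing the argument in the paragraph).

```latex
% If (P, p_A, p_B) is a product of A and B and \varphi : P \to P is
% any automorphism, then
(P,\; p_A \circ \varphi,\; p_B \circ \varphi)
% is again a product of A and B with the same vertex object P.
% The canonical comparison isomorphism between the two product cones
% is \varphi itself, which need not be the identity -- so skeletality
% does not make the projections unique.
```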
Best Answer
I hesitate to let this out, but there's always this cute little note that I learned from another MO answer (I don't know which one): https://www.maths.ed.ac.uk/~tl/glasgowpssl/banach.pdf. Maybe this will satisfy your curiosity, but I maintain that it takes a warped mind to identify such a categorical formulation of integration as the "right" way to think about integrals.
The advantage of categorical thinking in my view is that it helps to organize computations and arguments involving several different kinds of structures at the same time. For instance, (co)homology is all about capturing useful invariants associated to a complicated structure (e.g. a geometric object) in a much simpler structure (e.g. an abelian group). When we want to determine how the invariants behave under certain operations on the complicated structure (e.g. products, (co)limits) it helps to have a theory already set up to tell us what will happen to the simpler structure. That's where category theory comes into its own, and instances of this paradigm are so ubiquitous in algebra and topology that category theory has taken on a life of its own. It seems that people working in those areas have found it convenient to build categorical constructions into the foundations of their work in order to emphasize generality (one can treat algebraic varieties and solutions to diophantine equations on virtually the same footing), keep track of different notions of equivalence (e.g. homotopy versus homeomorphism), build new kinds of spaces (e.g. groupoids), and to achieve many other aims.
In many kinds of analysis, this kind of abstraction isn't necessary because there's often only one structure to keep track of: $\mathbb{R}$. When you think about it, analysis is only possible because we are willing to seriously overburden $\mathbb{R}$. Take, for example, the expression "$\frac{d}{dt}\int_X f_t(x) d\mu(x)$" and consider all of the different ways the real numbers are being used. They serve as a geometric object (odds are $X$ is built out of some construction involving the real numbers or a subspace thereof), a way to give $X$ additional structure (it wouldn't hurt to guess that $\mu$ is a real-valued measure), a parameter ($t$), and a reference system ($f$ probably takes values in $\mathbb{R}$ or something related to it). In algebraic geometry, one would probably take each of these roles seriously and understand what kind of structure they are meant to bring to the problem. But part of the power and flexibility of analysis is that we can sweep these considerations under the rug and ultimately reduce most complications to considerations involving the real numbers.
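As a toy illustration of the overloaded expression above, here is a quick numerical sanity check (my own example, not from the answer) that differentiating under the integral sign is consistent for $f_t(x) = e^{-t x^2}$ on $X = [0,1]$ with Lebesgue measure:

```python
import math

def integrate(f, a, b, n=10000):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def F(t):
    """F(t) = integral over [0,1] of exp(-t * x^2) dx."""
    return integrate(lambda x: math.exp(-t * x * x), 0.0, 1.0)

t, eps = 1.0, 1e-5
# Left side: differentiate the integral (central finite difference in t).
lhs = (F(t + eps) - F(t - eps)) / (2 * eps)
# Right side: integrate the t-derivative of the integrand, -x^2 exp(-t x^2).
rhs = integrate(lambda x: -x * x * math.exp(-t * x * x), 0.0, 1.0)
# The two agree to within the discretization error.
```

Here all four roles of $\mathbb{R}$ collapse into ordinary floating-point arithmetic, which is exactly the "sweeping under the rug" the answer describes.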
All that being said, the tools of category theory and homological algebra actually have started to make their way into analysis. Because analysts generally consider problems tied to certain very specific kinds of structure, they have historically focused on providing the sharpest and most detailed solutions to their problems rather than extracting the crude, qualitative invariants for which cohomological thinking is most appropriate. However, as analysts have become more and more attuned to the deep relationships between functional analysis and geometry, they have turned to ideas from category theory to help keep things organized. K-theory and K-homology have become indispensable tools in operator theory; there is even a bivariant functor $KK(-,-)$ from the category of $C^*$-algebras to the category of abelian groups relating the two constructions, and many deep theorems can be subsumed in the assertion that there is a category whose objects are $C^*$-algebras and whose morphism spaces are given by $KK(A,B)$. Cyclic homology and cohomology have also become extremely relevant to the interface between analysis and topology.
So ultimately I think it all comes down to what kinds of subtleties are most relevant in a given problem. There is just something fundamentally different about the kind of thinking required to estimate the propagation speed of the solution operator for a nonlinear PDE compared to the kind of thinking required to relate the fixed point theory in characteristic 0 of a linear group acting on a variety to that in characteristic p.