[Math] Extensional theorems mostly used intensionally

Tags: ca.classical-analysis-and-odes, lo.logic, mathematical-philosophy

Some theorems are stated and proved extensionally, but in practice are almost always used intensionally. Let me give an example to make this clear — integration by parts:
$$ \int_a^b f(x)g'(x)\,dx = \left[f(x)g(x)\right]_a^b - \int_a^b f'(x)g(x)\,dx$$
for two continuously differentiable functions $f$ and $g$. In practice, this is seldom applied to functions themselves but rather to expressions denoting functions. More importantly, it is almost always applied by 'pattern matching' on a product term. Note that integration is usually described formally as an operation on functions (i.e. extensional objects), yet in first-year calculus students are taught to master a series of rewrite rules (i.e. operations on intensional objects).
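As a concrete illustration (a standard first-year example, added here for emphasis): to evaluate $\int_0^1 x e^x\,dx$, one matches the integrand against the pattern $f(x)g'(x)$, reading off $f(x) = x$ and $g'(x) = e^x$ purely from the syntax of the expression, and then rewrites
$$ \int_0^1 x e^x\,dx = \left[x e^x\right]_0^1 - \int_0^1 1 \cdot e^x\,dx = e - (e - 1) = 1.$$
At no point does one reason about the functions themselves beyond checking differentiability; the rule is driven entirely by the shape of the term.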

Logicians [Leibniz, Frege, Russell, Wittgenstein, Quine, Carnap to name a few] have worried a lot about this. Linguists [Montague comes to mind], and physicists [A. Bressan] have worried about this too.

I have two questions:

  1. What other examples have you run into of such mixing of extension and intension?
  2. Why is this dichotomy not more widely taught / appreciated?

In the case of algebra (more precisely, equational theories), the answer to #2 is very simple: the dichotomy does not matter there, because we have well-behaved adjunctions between the extensional and intensional theories [in fact, we often have isomorphisms]. For example, there is no essential difference between polynomials (over fields of characteristic 0) treated syntactically or semantically. But there is a huge difference between terms in analysis and the corresponding semantic theorems.
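A standard example (added here for illustration) of why the restriction on the field matters: over $\mathbb{F}_p$ the formal polynomial and the function it denotes come apart, since by Fermat's little theorem
$$ x^p - x \neq 0 \ \text{in } \mathbb{F}_p[x], \qquad \text{yet} \qquad a^p - a = 0 \ \text{for every } a \in \mathbb{F}_p,$$
so the evaluation map $\mathbb{F}_p[x] \to \mathrm{Fun}(\mathbb{F}_p,\mathbb{F}_p)$ has a nontrivial kernel. Over an infinite field (in particular in characteristic 0) this map is injective, which is why the syntactic and semantic views of polynomials can be identified there.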

Best Answer

It seems to me that the mathematical equivalent of the intensional vs. extensional distinction in philosophy would be the distinction between "formal" vs. "functional" objects: formal power series vs. convergent power series, formal integration by parts (with no regard for checking the validity of the operation in a real analysis sense) vs. rigorous integration by parts, formal polynomials vs. functions which happen to be represented by a polynomial, etc. If so, I would say that the formal vs. functional distinction is usually dealt with in more advanced classes, though typically not at the first-year undergraduate level.
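To make the first of those contrasts concrete (a standard example, not part of the original answer): the series
$$ \sum_{n=0}^{\infty} n!\, x^n $$
is a perfectly good element of the formal power series ring ${\bf R}[[x]]$, where it can be added, multiplied, and composed, yet it converges only at $x = 0$, so it defines no analytic function on any neighbourhood of the origin.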

For instance, in algebra, the concept of an indeterminate variable (and its distinction from the set-theoretic notion of a variable ranging over a fixed domain) tends to be sufficient for keeping the two concepts distinct in most situations involving set-theoretic functions and the formal expressions giving rise to those functions. In particular, polynomials can be formal by living in some polynomial ring $R[x]$ generated by an indeterminate $x$, rather than having to be set-theoretic functions on some domain. Algebraic geometry also takes particular care in distinguishing an ideal of polynomials from the set-theoretic locus that the ideal cuts out over a given field, or, more generally, in distinguishing a scheme from a variety.
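A minimal example of the ideal-versus-locus distinction (standard, added for illustration): the ideals $(x)$ and $(x^2)$ in $k[x]$ are distinct, yet they cut out the same set-theoretic locus $\{0\}$ in the affine line over $k$. Scheme-theoretically, $\operatorname{Spec} k[x]/(x^2)$ is a 'fat point' that remembers a first-order infinitesimal direction which the bare set $\{0\}$ forgets.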

Similarly, real analysis, with all its cautionary counterexamples as to how various formal operations (e.g. exchanging limits or sums) can lead to disaster if the appropriate functional hypotheses are not verified, also tends to be pretty good about distinguishing a formal computation from a functional one; often the former is used as an initial heuristic motivation only, with the latter then being brought in for the rigorous proof. Although certainly mistakes have been made by treating a formal computation as if it were functionally valid...
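One of the standard cautionary counterexamples alluded to here (added for concreteness): let $f_n = n\,1_{(0,1/n)}$ on $[0,1]$. Then
$$ \int_0^1 f_n(x)\, dx = 1 \ \text{for every } n, \qquad \text{while} \qquad \lim_{n\to\infty} f_n(x) = 0 \ \text{for every } x \in [0,1],$$
so formally exchanging the limit with the integral turns $1$ into $0$; some functional hypothesis such as uniform convergence or domination is needed to license the exchange.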

Related to this is the ubiquitous "abuse of notation" in which a package of objects, structures, and forms is referred to via its most prominent component (i.e. by synecdoche). Thus, for instance, one often sees a single symbol $P$ used to denote both a polynomial function $P: {\bf R} \to {\bf R}$ and the formal polynomial that represents it, or vice versa (e.g. "the polynomial $x^2$" to refer to the function $x \mapsto x^2$). Another common instance of this arises when dealing with spaces (sets with additional structure): one often abuses notation by using the set itself to denote the space, e.g. a group might be denoted by its set $G$ of elements rather than by the tuple $(G, e, \cdot, ()^{-1})$ of group structure, or a set-theoretic function by just the mapping $f$ rather than the triplet $(f,X,Y)$ that includes the domain and codomain of that mapping. Such abuses are technically illegal under the strictest interpretations of mathematical notation, but they save a lot of space and, when used correctly, allow readers to focus on the actual content of an argument rather than on its formalism. Still, it is useful and important to point these abuses out explicitly from time to time...