Functions – Some Confusion About What a Function Really Is

definition, elementary-set-theory, foundations, functions, philosophy

Despite my username, my background is mostly in functional analysis, where (at least to my understanding) a function $f$ is considered a mathematical object in its own right, distinct from the values it takes under point evaluation (i.e. $f(x)$). Another way of stating this is that the possible values of a function under evaluation are properties of the function, considered as its own mathematical object.

However, I am reading a book on the foundations of mathematics by Kunen, and he notes that in axiomatic set theory a function is identified with its graph (i.e. a set of ordered pairs). I was under the impression that defining a function as a set of ordered pairs was an oversimplification used by high-school teachers, one that students grow out of past calculus.

So anyway, what is the most fundamental definition of a function? Obviously we all (students of mathematics) know what a function is intuitively, but formally, I have a hard time swallowing the idea that a function is the same thing as its graph. I realize that the whole point of axiomatic set theory is to make it possible to express every mathematical object in terms of sets, but I find this definition to be particularly disappointing. I suspect that this is one of those things that just depends on what area one chooses to work in, but I'd love to see what thoughts some of the more experienced mathematicians on here can offer.

Best Answer

There are two common "general" definitions for functions in modern mathematics:

  1. A function is a set (or class) of ordered pairs with a particular property: if $(x,y)$ and $(x,z)$ both belong to the function, then $y=z$.

  2. A function is an ordered triple $(A,B,f)$, where $f$ is a set of ordered pairs as in the previous definition, with domain $A$ and range a subset of $B$.

(There can be other definitions, depending on the context.)
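In symbols (just a sketch; $\operatorname{dom}$ and $\operatorname{ran}$ below denote the usual projections of a set of ordered pairs), the first definition asks of $f$ only that

$$\forall x\,\forall y\,\forall z\,\bigl((x,y)\in f\wedge(x,z)\in f\rightarrow y=z\bigr),$$

while the second packages the same $f$ together with an explicit domain and codomain: a triple $(A,B,f)$ such that $f$ satisfies the condition above, $\operatorname{dom}(f)=\{x:\exists y\,(x,y)\in f\}=A$, and $\operatorname{ran}(f)=\{y:\exists x\,(x,y)\in f\}\subseteq B$. The second definition is the one under which notions like "codomain" and "surjective" become well-defined.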

These definitions are not "an oversimplification that teachers used in high school", and certainly not something to "grow out of" past calculus. In fact, in most high schools, a student is likely to believe that a function is always given by some sort of formula, whereas in both of these abstract definitions no formula is needed, and in many cases, one cannot even assume there is a formula in the meta-language which defines the graph of the function.
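A counting heuristic makes the last point plausible (only a sketch: making "definable" precise inside a model is delicate, but the cardinality comparison is sound). A formal language built from countably many symbols has only countably many formulas, whereas

$$\#\{\text{formulas}\}=\aleph_0 \qquad\text{vs.}\qquad \#\{f\colon\mathbb{R}\to\mathbb{R}\}=\left(2^{\aleph_0}\right)^{2^{\aleph_0}}=2^{2^{\aleph_0}},$$

so at most countably many functions from $\mathbb{R}$ to $\mathbb{R}$ can have their graphs singled out by a defining formula.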

There is no reason to be disappointed by this formalization any more than there is to be disappointed by the implementation of "integer" as a number in a finite range (such as $-2^{31}$ to $2^{31}-1$) when it comes to programming in C. Functions, as well as integers in C, are useful to us and their formalization works out just fine; and whenever we reach some limitations of these formalizations we can bypass them (with class functions in set theory, and with bignum libraries in C, often limited mostly by available memory).
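To make the analogy concrete, here is a minimal C sketch (the exact bounds of `int` are implementation-defined; the comments assume a typical 32-bit `int`, and GMP is mentioned only as one well-known bignum library):

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    /* C implements "integer" as a number in a finite range; on a
       typical platform with 32-bit int, the bounds printed below
       are -2^31 and 2^31 - 1. */
    printf("INT_MIN = %d\n", INT_MIN);
    printf("INT_MAX = %d\n", INT_MAX);

    /* Within that range the implementation behaves exactly like
       the mathematical integers: */
    int a = 1000000, b = 2000;
    printf("a * b = %d\n", a * b);   /* 2000000000, still in range */

    /* Past the limit the encoding breaks down (signed overflow is
       undefined behaviour in C), and one bypasses the limitation
       with a bignum library such as GMP; the analogue of passing
       to class functions in set theory. */
    return 0;
}
```

The trade-off is the same in both settings: a simple fixed representation covers the vast majority of uses cheaply, and the rare cases that exceed it get an explicit, heavier escape hatch.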

There are two good reasons not to think of functions in terms of "properties":

  1. When formalizing mathematics, one can show that in general one should not expect every mathematical object to have "expressible properties". Namely, given the real numbers and any "naively reasonable naming mechanism", there will be real numbers which cannot be named: such a mechanism supplies at most countably many names, while there are uncountably many real numbers. So if we cannot expect every real number to have a discerning property, why should we expect more complicated objects to have one?

  2. The point of set theory as a foundational theory is to implement mathematics at a semantic level, namely to interpret objects as sets. Having a very simple definition for a function (in a technical sense of "very simple") is a good thing, and it allows the foundations to carry all sorts of theorems. For example, when talking about sufficiently simple statements about the natural numbers, one can omit the axiom of choice from the assumptions.

    One of the reasons this can be done is that our interpretation of "function" is sufficiently simple that we can be certain it does not change between our original universe and an inner universe where the axiom of choice holds. That way we can ensure that if the statement was true in the inner universe, it will remain true in our universe as well. Of course, one could perhaps formalize similar theorems for whatever other encoding of functions into sets one prefers, but the more complicated the coding mechanism becomes, the more fragile our theorems become, and the harder they are to prove.

    So simplicity has its benefits when it comes to actually proving things from a foundational point of view.
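For the curious, the argument alluded to above has the following shape in the standard case of Gödel's constructible universe $L$ (a sketch; "sufficiently simple" may be read as "arithmetical"):

$$\mathrm{ZFC}\vdash\varphi \;\Longrightarrow\; \mathrm{ZF}\vdash\varphi^{L} \;\Longrightarrow\; \mathrm{ZF}\vdash\varphi \qquad (\varphi\ \text{arithmetical}).$$

The first implication holds because $L$ is an inner model of $\mathrm{ZFC}$ constructed inside any universe of $\mathrm{ZF}$, and the second because arithmetical statements are absolute between $L$ and the full universe. Verifying that absoluteness is exactly the kind of task a simple coding of "function" keeps manageable.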
