[Math] formal notion of what we do when we ‘Let X be …’

Tags: computational-complexity, formal-languages, lo.logic, metamathematics

This is likely an elementary question to logicians or theoretical computer scientists, but I'm less than adequately informed on either topic and don't know where to find the answer. Please excuse the various vague notions that appear here, without their appropriate formal setting. My question is exactly about what I should have said instead of what follows:

When we make a definition, of either a property, or the explicit value of a symbol, it seems that we are somehow changing the language. Prescribing meaning to a word might be viewed as a kind of transformation of the formal language akin to taking a quotient, where we impose further relations on a set of generators.

I don't know how to describe a 'semantic' object, but am assuming an ad-hoc definition could be: a class of words under an equivalence relation supplied by an underlying logic. If the complexity of such an object is the size of the smallest word in the language that describes it, then making a definition lowers the complexity of some objects (and leaves that of the rest unchanged). The obvious example is that if I add the word group to my language, then saying G is a group is a lot shorter than listing its properties.
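The point about the word group can be made concrete in a toy model. Everything here (the axiom string, the "vocabulary" set, the `describe` helper) is an illustrative assumption, not a formal system; complexity is just the length of the shortest available description.

```python
# Toy model: an object's "complexity" is the length of its shortest
# description in the current vocabulary. Adding a defined symbol
# ("group") shortens descriptions of objects that satisfy the definiens.

GROUP_AXIOMS = "closure & associativity & identity & inverses"

def describe(obj_name, vocabulary):
    """Shortest description of a group-like object in the given vocabulary."""
    if "group" in vocabulary:
        return f"{obj_name} is a group"            # defined symbol available
    return f"{obj_name} satisfies {GROUP_AXIOMS}"  # must spell out the axioms

before = describe("G", vocabulary=set())
after = describe("G", vocabulary={"group"})
assert len(after) < len(before)  # the definition lowered the complexity
```

Of course this only models the syntactic side; the equivalence relation supplied by the logic is what licenses treating the two descriptions as the same object.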

It seems that lowering complexity is a main point of making definitions. Moreover, one reason mathematical theory-building works is a compression effect: I can use fewer resources to describe more complex objects, at the cost of the effort it takes to absorb definitions from a textbook.

Likely there is some theory out there that describes this process, but I've not been able to google it. I would appreciate being pointed towards the right source, even if it's a wikipedia link. Specifically:

Where can I find a theory of formal logic or complexity theory that studies the process of adding definitions to a mathematical language, viewed as a transformation that changes complexity?

Best Answer

Kieffer, Avigad, & Friedman, 2008, A language for mathematical knowledge management, which I mentioned in the Proof formalization thread, discusses DZFC, an extension of ZFC with definitions of terms and partial terms. Theorem 1 proves conservativity over ZFC, about which the paper says:

the usual method of eliminating defined function symbols and relation symbols by replacing them by their definiens can result in an exponential increase in length.

This is roughly in line with analogous results for other formalisms: Neel mentioned the doubly exponential blow-up for normalisation in the simply typed lambda calculus, and more drastic blowups are possible in higher type systems.
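The exponential blow-up from eliminating defined symbols can be sketched with a hypothetical chain of definitions d_0 := x, d_{k+1} := (d_k, d_k) (my own illustrative example, not one from the paper): the unexpanded term d_n has constant size, while replacing each symbol by its definiens duplicates the previous expansion at every step.

```python
# Sketch: size of the term d_n after fully eliminating the defined symbols
# d_0 := x, d_{k+1} := (d_k, d_k). With the symbols kept, "d_n" has size 1;
# eliminated, it has 2**n occurrences of the leaf x.

def expanded_size(n):
    """Number of leaf occurrences of x in the full expansion of d_n."""
    size = 1            # d_0 expands to the single symbol x
    for _ in range(n):
        size *= 2       # each d_{k+1} duplicates the expansion of d_k
    return size

print(expanded_size(10))  # 1024: a chain of 10 definitions, 2**10 leaves
```

So a linear number of definitions buys an exponential saving in term size, which is exactly the kind of compression the question gestures at.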

It's unclear to me what notion of complexity is sought, but the growth in term size under definition elimination will typically match the time complexity of the expansion algorithm.
