[Math] Why are structures interesting

abstract-algebra, logic, model-theory, soft-question, universal-algebra

Structures are sets together with some constants, relations, and functions on that set. They are studied in many areas of mathematics: for example, universal algebra studies algebraic structures (i.e. structures with only constants and functions, no relations) in general, while model theory studies the connection between a formal logic and structures. As a last example, abstract algebra studies more concrete structures, such as groups, fields, rings, monoids, and so on.
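To fix ideas, here is one concrete instance of this definition (the particular choice of the integers is just an illustration): the ordered ring of integers
$$\mathcal{Z} = (\mathbb{Z};\ 0,\ 1;\ <;\ +,\ \cdot)$$
is a structure whose carrier set is $\mathbb{Z}$, with two constants $0$ and $1$, one binary relation $<$, and two binary functions $+$ and $\cdot$. A group $(G;\ e;\ \cdot,\ {}^{-1})$ fits the same pattern with no relation symbols at all.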

Okay. Structures are very important in mathematics. I am interested in the history and the motivation of the definition of structure.

  • What makes structures so interesting?
  • Why does the study of structures give us such powerful tools?
  • What is the motivation behind the definition of the term structure? (For example, why can't a structure have more than one carrier set?)
  • How did the term structure historically develop? What were the first motivating examples?
  • Why did one choose the name "structure"? I personally interpret this word ("structure") as some kind of pattern. What does this have to do with the mathematical notion of structures?

Best Answer

This is really just a comment (see my postscript at the end), but it is way, way too long for one:

So the main idea is this: we're looking for a notion which generalizes all, or many, of the already-very-general kinds of mathematical objects studied so far: groups, rings, fields, vector spaces, and so on. Whatever definition we settle on should have two properties:

  • It should be sufficiently general to cover all, or at least a wide variety, of the mathematical objects we've already become interested in.

  • It should be specific enough that we can prove things about it. Too much generality is not inherently good!

Now, let me point out that this question has been answered in different ways! E.g.

  • In the context of universal algebra, we're interested in sets equipped with functions - no relation symbols allowed!

  • In the context of model theory, the definition of structure is as you've given it. Note that this generalizes beyond the universal algebraic setting by allowing relations.

  • What about times when we care about topology? Topological groups, rings, fields etc. make sense, but aren't captured by the classical notion of "structure" from model theory. If you want to talk about topological structures, you need an even more general definition: a topological structure is a structure (in the usual sense) together with a topology on its underlying set. See e.g. the book Continuous Model Theory by Chang and Keisler. Now, maybe we also want to add some compatibility conditions on how the structure and topology interact; this is the approach taken in continuous logic, which has so far been more effective than the unrestricted version (at least, that's my impression).

  • And, of course, sometimes we're interested in things that look like structures "from the outside" - e.g., a group object in some category. We can develop categorical versions of model theory or universal algebra (see e.g. Lawvere theories), which are even more general than what I've described so far.

  • Going in a different direction, we could allow infinitary relations and functions. I believe this was looked at by Addison a long time ago, but I don't have a citation. Note that there are very natural examples of infinitary operations and relations - for example, the relation "converges as an infinite series" (made precise just after this list)!
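To make that last example concrete, here is one way the convergence relation might be written down as a genuinely infinitary relation; the name $\mathrm{Conv}$ and the choice of $\mathbb{R}$ as carrier set are mine, purely for illustration:
$$\mathrm{Conv} \subseteq \mathbb{R}^{\omega}, \qquad \mathrm{Conv}(a_0, a_1, a_2, \dots) \iff \sum_{n=0}^{\infty} a_n \text{ converges}.$$
A symbol for $\mathrm{Conv}$ would need arity $\omega$, which the classical model-theoretic definition of a structure (finitary relation and function symbols only) does not allow.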

So that's my pushback against the idea that there's a "right" notion of structure. That said, the classical model-theoretic notion of structure has clearly been extremely useful. So, what made it so?

It's difficult to make a convincing argument here, but let me list a few points in favor of this definition.

  • It is sufficiently general to capture basically every structure studied in (the non-topological parts of) abstract algebra.

  • It provides a semantics for set theory, which in turn lets us talk about the more general approaches. Remember that ZFC, despite talking about the set-theoretic universe, is really a first-order theory! Note that we run into trouble here if we want to talk about e.g. class-sized objects, but I don't really think that's fatal to this idea (see e.g. NBG or universes).

  • This definition is sufficiently narrow that we can prove theorems about it: e.g. compactness, Löwenheim-Skolem, Herbrand's theorem, etc. (two of these are stated precisely just after this list). These theorems in turn have been applied to specific mathematical problems - see e.g. the Ax-Kochen theorem or proof mining. Note that these theorems are really theorems of the underlying logic (first-order logic) rather than of the notion of structure per se; but the two aren't really that different: the completeness theorem gives a sense in which the notion of structure is in correspondence with the underlying logic. (Speaking of logic, note that we could also ask why first-order logic is the "right" logic to use, and how that decision was made; see e.g. this paper by Ferreirós. Also see Lindström's theorem.)
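For concreteness, here are the standard statements of two of the theorems named above, recalled just to show the kind of result the classical definition supports:

  • Compactness: a set $T$ of first-order sentences has a model if and only if every finite subset of $T$ has a model.

  • Löwenheim-Skolem (downward): if a theory in a countable language has an infinite model, then it has a countably infinite model.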

Note that this answer completely avoids any real historical discussion. This reflects my lack of knowledge. While I know a little bit about the philosophical arguments which surrounded the adoption of first-order logic, I know nothing about the history of the definition of "structure." That's why everything in this answer is retrospective: knowing what we know now mathematically, what can we say about "structure"? I hope someone with actual knowledge will give a better answer.
