[Math] Why do we (mostly) restrict ourselves to Latin and Greek symbols?

Tags: convention, notation

99% of the variables, constants, etc. that I run into are denoted by either a Latin letter (like $x$) or a Greek letter (e.g. $\pi$). Sometimes I twitch a little when I have to keep two separate meanings for one symbol in my head at once.

Back during the Renaissance and earlier, I can see why this would have been the case (everybody who set today's conventions for notation spoke Latin or Greek). But it's 2016: thanks to globalization and technological advances, when symbol overloading becomes an issue one could simply look through the Unicode standard (e.g. via a character map or virtual keyboard) and pick a favorite character.
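(For what it's worth, this is already technically possible. Below is a minimal sketch, assuming a XeLaTeX or LuaLaTeX toolchain with the unicode-math package; the font and the choice of U+2127 as the borrowed glyph are my own, purely for illustration.)

    \documentclass{article}
    \usepackage{unicode-math}   % assigns math classes to Unicode math code points
    \setmathfont{STIX Two Math} % any OpenType math font with wide glyph coverage
    \begin{document}
    % U+2127 (INVERTED OHM SIGN) pressed into service as a fresh constant;
    % unicode-math lets the glyph be typed directly inside math mode.
    Let $℧$ denote a new constant, so that $℧ + ℧ = 2℧$.
    \end{document}

Whether readers, and blackboards, tolerate such a symbol is another matter, as the answer below argues.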

So why do we (mostly) limit our choice of symbols to Latin and Greek characters?

Best Answer

There isn't a single answer, but:

  • Unicode isn't useful when you're writing notes by hand or at a blackboard.
  • We want symbols to be distinguishable from the running text but still familiar and able to play nicely with other symbols. Unicode characters that are just Latin letters with a decoration, or glyphs from scripts very few mathematicians can read, are not particularly useful.
  • We do use other symbols: the Cyrillic Ш for the Tate-Shafarevich group of an elliptic curve, the Hebrew $\aleph$ for various cardinals, $\pitchfork$ for transverse intersection, $\wp$ (whatever that is) for the Weierstrass function, $\sharp$ and $\flat$ for the isomorphisms between the tangent and cotangent bundles in Riemannian geometry, etc. But:
  • We don't use other symbols more often because using too many of them makes a text hard to read. Math isn't computer programming: math symbols aren't typed into a compiler and don't have a rigid syntax, and ultimately the goal of a math paper is to be read by other humans.
  • As such, overloading symbols isn't necessarily a problem. It's hardly confusing to use $e$ to denote both the number $2.7182\ldots$ and the identity element of a group (see the sketch after this list). We could introduce a unique symbol for the latter (not $1$, presumably), but how would that be any easier?
  • In the opposite direction, there are a dozen kinds of integrals denoted by the $\int$ symbol, but they're all more or less the same kind of object, so it's not taxing to the reader. The symbol $+$ denotes addition on a vector space, in a module, in a ring, of matrices, in a field, etc.; but it's all ultimately the same kind of operation.
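By way of illustration (my own example, not part of the original answer), here is a short LaTeX fragment in which both meanings of $e$ and several meanings of $+$ coexist without ambiguity; a reader resolves each occurrence from the surrounding structure, exactly as described above.

    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    % Two meanings of $e$ in one paragraph, disambiguated by context:
    Let $G$ be a group with identity $e$, so $eg = ge = g$ for all $g \in G$;
    meanwhile Euler's number satisfies $e = 2.7182\ldots$ and $e^{i\pi} = -1$.
    % One glyph $+$, three different additions, all the same kind of
    % operation (vectors, matrices, reals):
    \[
      (1,2) + (3,4) = (4,6), \qquad
      \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
      + \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
      = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \qquad
      \tfrac{1}{2} + \tfrac{1}{2} = 1.
    \]
    \end{document}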