I had never thought about this issue until recently, when I started using Haskell to build a substantial project. In Haskell (and in functional programming languages generally), most so-called "variables" are really mathematical variables, which seem to be immutable by definition.
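To make the point concrete, here is a minimal sketch (my own illustrative names, not from any particular project) showing that a Haskell binding cannot be reassigned, and that mutation has to be opted into explicitly through something like an `IORef`:

```haskell
-- Immutable bindings vs. explicit mutable references in Haskell.
import Data.IORef (newIORef, readIORef, writeIORef)

main :: IO ()
main = do
  -- 'x' behaves like a mathematical variable: within this scope it
  -- always denotes 42. A later line 'x = 43' would be a compile-time
  -- error (conflicting definition), not an update.
  let x = 42 :: Int

  -- Mutation is possible, but only through an explicit reference cell:
  r <- newIORef x
  writeIORef r (x + 1)  -- update the cell, not the binding 'x'
  y <- readIORef r
  print (x, y)          -- x is still 42; the cell now holds 43
```

Running this prints `(42,43)`: the binding `x` never changed, only the contents of the reference cell did.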
This confused me. "Variables" in imperative programming languages make perfect sense, because they are indeed mutated often. But the name seems very strange when the thing it denotes is actually invariable.
Since the word "variable" originated from mathematics and has been in use for several decades, I'm curious how such an apparent self-contradiction came into existence in the beginning. Did people have a different idea in mind when they first invented the word, and it evolved over time to its current meaning? Was immutability not an important concern at that time such that it was overlooked? Or did I just get it wrong and "variables" in mathematics are actually not totally immutable after all?
Best Answer
For a good (and extremely accessible) overview of the various roles of "variables" in mathematics, see Conceptions of School Algebra and Uses of Variables by Zalman Usiskin. The introduction lays out the terrain quite well.
I will resist the temptation to quote additional large chunks of Usiskin's text, but the rest of the paper does an exemplary job of distinguishing among the different conceptions of "variable" in mathematics, and how those conceptions relate to different conceptions of what "algebra" is.