For what it is worth, the Oxford English Dictionary traces monoid in this sense back to Chevalley's Fundamental Concepts of Algebra, published in 1956. Arthur Mattuck's 1957 review of the book suggests that this use may have been new, or at least not yet common mathematical parlance.
Edit:
- Indeed, as recently as 1954, the term "monoid" was still sometimes used to mean a semigroup, not necessarily one with an identity.
- According to the OED again, the use of the word monoid in algebraic geometry (to denote "a surface which possesses a conical point of the highest possible order") dates back to 1866, and likely predates the use of the same term as semigroup with identity.
I think the following article:
Gregory H. Moore. The axiomatization of linear algebra: 1875-1940. Historia Mathematica, Volume 22, Issue 3, 1995, Pages 262–303
(available from Elsevier)
may shed some light on your question, though you may need more mathematical background to follow the entire article. Here is my understanding, having browsed the article, but I must stress that I am not a historian of mathematics, so please don't quote me!
The idea of an abstract space in which an addition is defined between elements and there is a field action (rather than a particular realization such as $\mathbb{R}^n$ or $C([0,1])$) seems to be due to Peano in 1888, who called such objects linear systems. The definition of an abstract vector space didn't catch on until the 1920s, in the work of Banach, Hahn, and Wiener, each working separately. Hahn defined linear spaces in order to unify the theory of singular integrals and Schur's theory of linear transformations of series (both involving infinite-dimensional spaces). Wiener introduced vector systems, which seem roughly equivalent to Banach's spaces. Banach's definition was motivated by the search for a common framework for integral operators, which were defined on champs (domains); his 1922 paper "Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales" is available online and is quite readable.
I understand the modern name vector space became popular because of a widely circulated 1941 textbook by Birkhoff and MacLane, A Survey of Modern Algebra, where the term is used.
As Asaf and Hans have indicated in their comments, the motivation for calling such spaces vector spaces is that, intuitively, they generalize our understanding of "vectors" (differences between points) in finite-dimensional Euclidean space. The motivation for calling them linear spaces is that the ability to add different elements together is the crucial feature which lets us apply the general theory to solve specific problems which are not obviously (to the 1920s eye) about vectors (in particular, in PDE and mathematical physics).
In your course, it is unlikely you will cover material that requires this abstraction, but it is a good habit for later mathematics to work in generality while you maintain your intuition in concrete examples.
Best Answer
The derivative (differential) is defined as the limit of the difference quotient
$$f'(b) = \lim_{a \to b} \frac{f(b) - f(a)}{b-a}$$
where difference quotient refers to the difference of $f(b)$ and $f(a)$ in the numerator and the difference of $b$ and $a$ in the denominator.
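For a concrete instance, take $f(x) = x^2$: the difference quotient factors, and the limit is immediate,
$$f'(b) = \lim_{a \to b} \frac{b^2 - a^2}{b - a} = \lim_{a \to b} (b + a) = 2b.$$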
The derivative is also defined (per Leibniz) as the ratio of differentials $dy$ and $dx$,
$$\frac{dy}{dx} = \lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x}$$
where $dy$ and $dx$ represent infinitesimal changes (differences) in $y$ and $x$, respectively.
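The same computation can be run in this notation. With $y = x^2$ and $\Delta y = (x + \Delta x)^2 - x^2 = 2x\,\Delta x + (\Delta x)^2$, we get
$$\frac{dy}{dx} = \lim_{\Delta x \to 0} \frac{2x\,\Delta x + (\Delta x)^2}{\Delta x} = \lim_{\Delta x \to 0} \left( 2x + \Delta x \right) = 2x,$$
in agreement with the difference-quotient definition.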
As far as the history of the term goes, differential was coined by Gottfried Leibniz.
Isaac Newton used the notation $\dot{y}$ to denote the rate of change of $y$, which he called a fluxion. Leibniz's notation is generally what is used in calculus today, though Newton's dot notation is still sometimes used for derivatives with respect to time, particularly in physics.