Abstract Algebra – How to Define Division Apart from the Inverse of Multiplication?

abstract-algebra, real-numbers, soft-question

Sorry if this question is too far out there, but I'm looking for a rigorous definition of the division operation. As I have seen it before, $a/b$ is defined as the solution to the equation $a = xb$. While I am okay with this sort of definition, it is tantamount to saying that division is the inverse operation of multiplication. (At least that's how I see it; comments on this assertion are welcome.)

Instead, what I seek is a different sort of formalization of the definition of division of real numbers. Whereas the definition above is nearly equivalent to defining division as the inverse operation of multiplication (again, maybe I'm wrong), I seek a definition that does not rely on multiplication already being defined.

Motivation: Consider the fundamental theorem of calculus, which essentially states that the indefinite integral is the inverse of the derivative. This is a highly non-trivial theorem, and while one may be able to define the indefinite integral as the inverse of the derivative (I could be wrong here as well), it is much more meaningful to define the two operations separately and then show that they are actually inverses of one another. Does anything like this exist for multiplication and division? Or are these operations just too fundamental?

Question: Does there exist a definition for $a/b$ where $a,b \in \mathbb{R}$ that does not require multiplication to already be defined?

I know, I'm grasping at straws, but maybe there is something here that really enhances my understanding of what division is in general. Or maybe there are some interesting definitions involving limits or something that I have never seen before.

Best Answer

One approach is to define a new notion: a Number is a pair of real numbers, $(a, b)$, with $b \ne 0$, and $(a, b)$ is "the same as" $(c, d)$ if $ad = bc$. (The idea here is that the Number $(a, b)$ represents the thing you want to call $a/b$, by the way).

We can add Numbers: $$(a, b) + (p, q) = (aq + pb, bq)$$

and multiply: $$(a, b) * (p, q) = (ap, bq)$$

and it turns out that these operations are independent of representatives (i.e., if two Numbers are "the same," then adding each to a third Number gives results that are "the same"). You have to prove that, of course.

Once you've done all this -- shown that the set of equivalence classes of Numbers is the same as the set of reals with addition and multiplication under the correspondence $r \mapsto (r, 1)$ -- you're ready to define division:

$$ (a, b) / (c, d) = (ad, bc). $$

And then you can prove that division has all the properties you want. (Note that the quotient is itself a Number only when $c \ne 0$, i.e., when the divisor is not "the same as" $(0, 1)$ — exactly the usual prohibition on dividing by zero.)
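If it helps to see the bookkeeping in one place, here is a minimal Python sketch of this construction. The class and method names are my own, and I use integer components purely to keep the example readable; nothing changes for real components.

```python
class Number:
    """A pair (a, b) with b != 0, meant to represent "a/b".

    (a, b) counts as "the same as" (c, d) exactly when a*d == b*c.
    """
    def __init__(self, a, b):
        if b == 0:
            raise ValueError("second component must be nonzero")
        self.a, self.b = a, b

    def same_as(self, other):
        """The equivalence relation: (a, b) ~ (c, d) iff ad = bc."""
        return self.a * other.b == self.b * other.a

    def __add__(self, other):
        """(a, b) + (p, q) = (aq + pb, bq)."""
        return Number(self.a * other.b + other.a * self.b, self.b * other.b)

    def __mul__(self, other):
        """(a, b) * (p, q) = (ap, bq)."""
        return Number(self.a * other.a, self.b * other.b)

    def __truediv__(self, other):
        """(a, b) / (c, d) = (ad, bc); requires c != 0."""
        return Number(self.a * other.b, self.b * other.a)

# Representation-independence, spot-checked on one example:
x, y = Number(1, 2), Number(2, 4)   # two representatives of "the same" Number
z = Number(3, 5)
assert x.same_as(y)
assert (x + z).same_as(y + z)       # sums agree up to "the same as"
assert (x * z).same_as(y * z)       # so do products
assert (x / z).same_as(y / z)       # and quotients
```

The asserts only spot-check well-definedness on one example; the actual proof is the small computation showing that $ad = bc$ forces the corresponding cross-products of the results to agree.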

This is really the idea that @Travis is suggesting in his comment, but skipping the visit to $\mathbb Q$.

BTW, Spivak's Calculus, in its last chapter, has a lovely problem about the "high-school student's real numbers," in which one defines a real to be a sequence of digits, defines addition and multiplication carefully, and then proves that these form a complete ordered field. In that problem, "division" is essentially the long-division algorithm, and proving that it does the opposite of long multiplication is a major pain in the neck. Perhaps this is, in some way, what you're seeking.
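For concreteness, the long-division algorithm in question emits decimal digits one at a time by repeatedly multiplying the remainder by $10$. Here is a small sketch of that idea (my own illustration for nonnegative integers, not Spivak's code):

```python
def long_division_digits(a, b, k):
    """First k decimal digits of a/b for integers a >= 0, b > 0,
    produced by the schoolbook long-division algorithm."""
    int_part, rem = divmod(a, b)
    digits = []
    for _ in range(k):
        rem *= 10                    # "bring down" a zero
        d, rem = divmod(rem, b)      # next digit and new remainder
        digits.append(d)
    return int_part, digits

print(long_division_digits(1, 7, 6))   # (0, [1, 4, 2, 8, 5, 7]) -- 1/7 = 0.142857...
```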

Post-comment remark

One more thing -- it's worth looking at Edwin Moise's "Elementary Geometry from an Advanced Standpoint", in which one of the later chapters describes the work of Eudoxus (I think!) on defining real numbers via (geometric) ratios. It's quite remarkable work, and hints at the idea of Dedekind cuts that were developed only after a couple of millenia.

Further post-comment remark

Here's the text of problem 29-2 from Michael Spivak's Calculus.

"This problem outlines a construction of "the high-school student's real numbers." We define a real number to be a pair $(a, \{b_n\})$ where $a$ is an integer and $\{b_n\}$ is a sequence of natural numbers from 0 to 9,. with the proviso that the sequence is not eventually $9$; intuitively, this pair represents $$a + \sum_{n = 1}^\infty b_n 10^{-n}.$$ With this definition, a real number is a very concrete object, but the difficulties involved in defining additional and multiplication are formidable (how do you add infinite decimals without worrying about carrying digits infinitely far out?). A reasonable approach is outlined below; the trick is to use least upper bounds right from the start.

(a) Define $(a, \{b_n\}) << (c, \{d_n\})$ if $a < c$, or if $a = c$ and for some $n$ we have $b_n < d_n$ but $b_j = d_j$ for $1 \le j < n$. Using this definition, prove the least upper bound property.

(b) Given $\alpha = (a, \{b_n\})$, define $\alpha_k = a + \sum_{n = 1}^k b_n 10^{-n}$; intuitively, $\alpha_k$ is the rational number obtained by changing all decimal places after the $k$th to $0$. Conversely, given a rational number $r$ of the form $a + \sum_{n = 1}^k b_n 10^{-n}$, let $r'$ denote the real number $(a, \{b'_n\})$, where $b'_n = b_n$ for $1 \le n \le k$, and $b'_n = 0$ for $n > k.$

Now for $\alpha = (a, \{b_n\})$ and $\beta = (c, \{d_n\})$ define $$\alpha ++ \beta = \sup\{ (\alpha_k + \beta_k)' : k \text{ a natural number} \}$$

(the least upper bound exists by part (a)). If multiplication is defined similarly, then the verification of all the conditions for a field is a straightforward task, not highly recommended. Once more, however, the existence of multiplicative inverses will be the hardest part."

Note: Spivak uses a boldface "<" where I've used $<<$, and a bold plus sign where I've used $++$, but I cannot make those appear here.
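To see what the definition in part (b) is doing, here is a rough sketch that computes the truncations $\alpha_k$ with exact rational arithmetic and watches the terms $\alpha_k + \beta_k$ climb toward their sup. This is a finite approximation only, and the helper name is mine:

```python
from fractions import Fraction

def truncation(a, b_digits, k):
    """alpha_k = a + sum_{n=1}^{k} b_n * 10**(-n), as an exact rational.
    b_digits is a 0-indexed list, so b_n = b_digits[n - 1]; requires k <= len(b_digits)."""
    return a + sum(Fraction(b_digits[n - 1], 10 ** n) for n in range(1, k + 1))

alpha_digits = [3] * 10   # alpha = 0.3333333333 (first ten digits of 1/3)
beta_digits  = [6] * 10   # beta  = 0.6666666666 (first ten digits of 2/3)

# The terms (alpha_k + beta_k) increase with k; their sup is alpha ++ beta.
for k in (1, 2, 5, 10):
    s = truncation(0, alpha_digits, k) + truncation(0, beta_digits, k)
    print(k, s, float(s))
# k = 1  -> 9/10   (0.9)
# k = 2  -> 99/100 (0.99)  ... increasing toward the sup, which is 1.
```

Note that the sup here is $1$ — the pair $(1, \{0, 0, \dots\})$ rather than "$0.999\dots$" — which is one place the "not eventually $9$" proviso earns its keep.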

So that doesn't really do what you've asked, but I think it gets to the main point: you can do all this stuff more or less algorithmically, at least if you're willing to invoke upper-bound ideas (which is somewhat the trick Eudoxus uses as well, amazingly enough). But mostly it's a pain in the neck. :)
