[Math] The Scientific Notation of Zero

algebra-precalculus, arithmetic, notation, number-systems, scientific-notation

This question was asked here, where the answer uses this description.

The last line reads:

"The special case of $0$ does not have a unique representation in scientific notation, i.e., $0 = 0\times10^0 = 0\times10^1 = \dots$"

My issue is that the value of $a$ cannot be $0$ since, as they state:

$a$ is a $\color{blue}{\text{real number}}$ satisfying $1\le|a|<10$.

Thus, zero cannot be written in scientific notation, and using their example is misleading.

Could somebody please clarify what benefit there is to defining scientific notation in this way, and why the answer claims that zero has multiple representations in scientific notation when none of those representations are correct according to that definition?

Best Answer

As far as I can see, no definition of scientific notation states that every number must be expressible in it.

As it so happens, each nonzero number can be expressed in scientific notation. Zero is a special case. There is no way to have $a\cdot 10^k = 0$ for $1\leq |a|<10$ and $k\in\mathbb{Z}$ since $x\cdot y = 0\Leftrightarrow (x=0~ \text{or} ~y=0)$ and $a\neq 0$ and $10^k\neq 0$ for all $k\in\mathbb{Z}$.

Does this cause a problem? Not really. If we ever want to use zero in a setting where we otherwise prefer scientific notation, we just write $0$ and omit the $10^k$ factor. There is nothing wrong with doing that, and it does not hinder our ability to understand or use arithmetic or other mathematical tools around it.


tldr: there is no scientific notation for zero in the form $a\cdot10^k$ with $1\leq |a|<10$, and we will always write it simply as $0$. Stop reading here if you don't care about more detail.


Edit: There seems to be some discrepancy in the usage of the terms "scientific notation" and "normalized scientific notation."

Definitions of Scientific Notation

definition 1: Scientific Notation: A number $x$ that is expressed as $x = a\cdot 10^k$, where $a$ is any real number and $k$ is any integer, is said to be in "scientific notation."

This definition has it such that $385\times 10^2$, $38.5\times10^3$, $3.85\times10^4$, $0.385\times10^5$, etc. are all valid ways of expressing the number $38500$.

definition 2: Normalized Scientific Notation: A number $x$ that is expressed as $x = a\cdot 10^k$, where $a$ is a real number with $1\leq |a| < 10$ and $k = \lfloor \log_{10} |x|\rfloor$, is said to be in "normalized scientific notation." (Note this requires $x\neq 0$, since $\log_{10} 0$ is undefined.)

Following the earlier example, $38500$ can be expressed in normalized scientific notation as $3.85\times 10^4$. In many texts (including all elementary texts on my shelf that I looked through), no distinction is made between the definition used above for "scientific notation" and the definition used here for "normalized scientific notation," prompting many people to always prefer the definition used in this paragraph no matter the context. (There do exist other standard conventions, such as engineering notation, but those generally go by a distinct name.)
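Definition 2 can be turned into a small computation: for nonzero $x$, the exponent is $\lfloor\log_{10}|x|\rfloor$ and the mantissa is whatever factor remains. A minimal Python sketch (the helper name `normalize` is my own; floating-point `log10` can in principle misplace the floor for values extremely close to a power of ten, so treat this as illustrative):

```python
import math

def normalize(x: float) -> tuple[float, int]:
    """Return (a, k) with x == a * 10**k and 1 <= |a| < 10.

    Zero has no normalized form, mirroring the definition above.
    """
    if x == 0:
        raise ValueError("0 has no normalized scientific notation")
    k = math.floor(math.log10(abs(x)))  # unique exponent from definition 2
    a = x / 10**k                       # mantissa with 1 <= |a| < 10
    return a, k

a, k = normalize(38500)   # a close to 3.85, k == 4
```

Note that the exception for zero is forced by the definition itself: no $a$ with $1\leq|a|<10$ can make $a\cdot 10^k$ vanish.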


Was Dr. Math wrong?

In response to hvd's comment that I had not properly addressed the question: I would say that Dr. Math did indeed make a mistake in saying "zero has multiple representations in scientific notation," since the definition he linked to requires $1\leq a<10$, and under that definition zero has no representation at all. Had his link pointed to a source using the first definition rather than the more commonly accepted second one, he would have been correct: under the lax definition every number, and in particular zero, has multiple representations.


Why is (normalized) scientific notation useful?

As for the usefulness of (normalized) scientific notation, it is a great convenience for quick mental arithmetic and estimation, and it avoids the difficulty of shuffling decimals around. Compare the difficulty of the following two calculations: $(24000\times 10^2)\times (0.0003\times 10^{-3})$ and $(2.4\times 10^6)\times(3\times 10^{-7})$. In the first case a great deal of extra thought goes into tracking the final magnitude, whereas in the second representation it is very quick to see that the result is $7.2\times 10^{-1}$.
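The mental procedure above can be checked mechanically: multiply the mantissas, add the exponents, and renormalize if the mantissa product leaves $[1,10)$. A quick Python sketch of that rule (variable names are my own):

```python
# (2.4 * 10^6) * (3 * 10^-7): mantissas multiply, exponents add.
a1, k1 = 2.4, 6
a2, k2 = 3.0, -7
a, k = a1 * a2, k1 + k2   # approximately 7.2 * 10^-1

# Renormalize if needed, e.g. 12 * 10^k would become 1.2 * 10^(k+1).
if abs(a) >= 10:
    a, k = a / 10, k + 1
```

Here the mantissa product $7.2$ already lies in $[1,10)$, so no renormalization step fires.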


Can every nonzero number be written in (normalized) scientific notation?

As mentioned in the comments, every nonzero number which can be represented in decimal notation can be represented in scientific notation. There are numbers which cannot be represented exactly in decimal notation, for which we must settle for an approximation if we wish to write them in the form $a_0.a_1a_2a_3a_4\ldots\times 10^k$; for example, $\pi\approx 3.141592\ldots$. As an additional example, $\pi^2$ in scientific notation is $\approx 9.869604\times 10^0$.

Another example of a number which we currently can't write out in (normalized) scientific notation is Graham's Number: it is far beyond our current ability to determine the digits we would need. That is not to say it cannot be written in scientific notation in principle. Some godlike being which could comprehend the vast complexities of Graham's Number could conceivably write it all out, since we know it is a finite integer, and every nonzero integer has a representation in (normalized) scientific notation.


Do there exist multiple representations of a number in scientific notation?

As for the question of multiple representations: under the lax definition of scientific notation, which allows $a$ to be any real number, every number can be expressed in scientific notation in multiple ways.

For the stricter definition of normalized scientific notation, there is only one choice of $k$, namely $\lfloor \log_{10}|x|\rfloor$, since that is a well-defined function of $x$. In other words, if $10^2\leq x < 10^3$ then every representation of $x$ in normalized scientific notation will be something times $10^2$. In that sense the exponent is unique. However, there do exist multiple ways of representing the value of $a$. If we require it to be written in decimal form, we could in theory write $3.2\times 10^2 = 3.1\overline{9}\times 10^2$, giving each number at least two decimal representations. If we further allow $a$ to be represented in any way, not only in decimal notation, we could write $1\times 10^2 = \frac{2}{2}\times 10^2 = \frac{3}{3}\times10^2 = \dots$ in many ways as well.
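The uniqueness of the exponent can be sanity-checked numerically: every $x$ with $10^2\leq x<10^3$ yields the same $k$. A small Python check (illustrative only; it assumes floating-point $\log_{10}$ is accurate enough at these magnitudes):

```python
import math

# floor(log10 x) is constant on [10^2, 10^3), so the exponent k in a
# normalized representation is uniquely determined, even though the
# mantissa a can be written in many equivalent ways.
for x in (100.0, 250.5, 999.999):
    k = math.floor(math.log10(x))
    assert k == 2, f"expected exponent 2 for {x}, got {k}"
```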

Barring all of this, i.e., requiring $a$ to be written in decimal notation and disallowing trailing repeating $9$'s, each representable nonzero number has a unique representation.