We know that 1 meter is the distance travelled by light in vacuum within a time interval of 1/299,792,458 second. My question is: why didn't we take a simpler number like 1/300,000,000, or why not just 1?

# [Physics] Why wasn’t the meter defined using a round-number fraction (like 1/300 000 000) of the distance travelled by light in 1 second?

history, metrology, si-units

#### Related Solutions

The speed of light is 299 792 458 m/s because people used to define one meter as 1/40,000,000 of the Earth's meridian - so that the circumference of the Earth was 40,000 kilometers.

Also, they used to define one second as 1/86,400 of a solar day, so that the day could be divided into 24 hours, each containing 60 minutes of 60 seconds.

In our Universe, it happens to be the case that light moves at such a speed that in 1 second defined above, it travels approximately 299,792,458 meters defined above. In other words, during one solar day, light changes its position by $$ \delta x = 86,400 \times 299,792,458 / 40,000,000\,\,{\rm circumferences\,\,of\,\,the\,\,Earth}.$$ The number above is approximately 647,552. Try it: instruct light to orbit along the surface of the Earth and you will find that between two noons, it completes 647,552 orbits. Why is it exactly this number? Well, because of how the Earth was created and evolved.
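The arithmetic in the paragraph above is easy to check directly:

```python
# How many trips around the Earth light completes in one solar day,
# using the historical definitions of the second and the meter.
seconds_per_day = 86_400            # 1 day = 24 * 60 * 60 s
speed_of_light = 299_792_458        # m/s (exact by the 1983 definition)
earth_circumference = 40_000_000    # m, from the old meridian definition

orbits = seconds_per_day * speed_of_light / earth_circumference
print(round(orbits))  # 647552
```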

If the Earth hadn't hit a big rock called Megapluto about 4,701,234,567.31415926 years ago, it would have been a few percent larger and would be rotating with a frequency smaller by 1.734546346 percent, so 647,552 would be replaced by 648,243.25246 - but because we hit Megapluto, the ratio eventually became what I said.

(There were about a million of similarly important big events that I skip, too.)

The Earth's size and rate of spinning were OK for a while, but they're not really regular or accurate, so people ultimately switched to wavelengths and periods of certain electromagnetic waves emitted by atoms. Spectroscopy remains the most accurate way in which we can measure time and distances. They chose the new meter and the new second as multiples of the wavelength or period of the photons emitted by various atoms - so that the new meter and the new second agreed with the old ones - those defined from the circumference of the Earth and from the solar day - within the available accuracy.

For some time, people would use two independent electromagnetic waves to define 1 meter and 1 second. In those units, they could measure the speed of light and find that it was 299,792,458 plus or minus 1.2 meters per second or so. (The accuracy was not that great for years; the error of 1.2 meters per second was the final accuracy achieved in the early 1980s.)

Because the speed of light is so fundamental - adult physicists use units in which $c=1$, anyway - physicists decided in the 1980s to redefine the units so that both 1 meter and 1 second are defined using the same type of electromagnetic wave. 1 meter was defined as 1/299,792,458 of a light-second, which, once again, agreed with the previous definition based on two different electromagnetic waves within the accuracy.

The advantage is that the speed of light is today known exactly, by definition. Up to the inconvenient numerical factor of 299,792,458 - which is otherwise convenient to communicate with ordinary and not so ordinary people, especially those who have been trained to use meters and seconds based on the solar day and meridian - it is as robust as the $c=1$ units.

The point is that **luminous intensity is intensity as perceived by the human eye**, and particularly taking into account the fact that the same amount of power will be perceived as brighter or dimmer depending on whether the wavelength is at a maximum of the eye's sensitivity or at a minimum.

This makes the candela ever so slightly squishier than the other six SI base units, because you need to rely on physiological measurements of the average human, whoever that is. But the simple fact is that you cannot measure how bright a light looks to a human eye using just stopwatches, yardsticks and weights - you need to use the eye itself as a measurement device, or at the very least calibrate with one. Luminous intensity is a measure of the biological response of a specific system (if not of the subjective perception this response causes), and unless you're willing to define the unit of luminous intensity in terms of neuron activity on the optic nerve, you really do need a unit of measurement that's independent of the mks triplet.

In general, suppose your eye receives light from a source with a radiant intensity of
$$I_\mathrm{r.i.}=(I_\mathrm{r.i.})\:\mathrm{W\:sr^{-1}m^{-2}},$$
i.e. each unit area $A$, channelling a pencil of radiation of solid angle $\Omega$ receives $I_\mathrm{r.i.}A\Omega$ joules of radiated energy per second. This still doesn't tell you how bright the source will look to you, though, because the different receptors are more or less sensitive to radiation at different wavelengths. However, this dependence has been quite thoroughly studied using a number of methods, which have resulted in a fairly standard **luminosity function**: that is, a function
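As a quick sanity check of the power relation just described (received power equals the radiant intensity times the area times the solid angle), with made-up illustrative numbers for the detector:

```python
# The relation above: a patch of area A, channelling a pencil of solid
# angle Omega from a source of radiant intensity I (in W sr^-1 m^-2),
# receives I * A * Omega watts. All numbers here are illustrative only.
I = 10.0        # W sr^-1 m^-2
A = 2.0e-4      # m^2, e.g. a small detector
Omega = 1.0e-3  # sr
power = I * A * Omega
print(power)    # 2e-06 W
```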
$$\bar{y}(\lambda),\text{ also denoted }V(\lambda),$$
that tells you how bright lights look at different wavelengths. Thus if $\bar{y}(\lambda_1)/\bar{y}(\lambda_2)=2$ then a light at $\lambda_1$ will look twice as bright as a lamp of the same radiant intensity at $\lambda_2$.

There is of course some individual variation, as well as the tough metrological problem of measuring and standardizing the luminosity function, and controlling for the fact that different populations can have different average responses, but that's all in the game and it's ultimately someone else's (i.e. not a physicist's) business. And if you want to really go down the rabbit hole, you need to control for the fact that the perceived intensity will vary under well-lit versus low-light conditions, and down and down it goes. However, you only need to normalize once for each set of conditions; hence the value of standard candles.

The candela comes in, essentially, as the units of the standard luminosity function. A light source of wavelength $\lambda$ and radiant intensity $I_\mathrm{r.i.}$ will have a (perceived) luminous intensity $$I_\mathrm{l.i.}=\bar{y}(\lambda)I_\mathrm{r.i.},$$ where now the luminous intensity $I_\mathrm{l.i.}$ is a completely different beast to the radiant intensity, depending as it does on a human reaction, so it is measured in candela. It follows, then, that the luminosity function has units of $\mathrm{cd}/(\mathrm{W\:sr^{-1}m^{-2}})$, and the role of the SI definition is to normalize it such that $$\bar{y}(555\:\mathrm{nm})=\frac{1\:\mathrm{cd}}{\mathrm{W\:sr^{-1}m^{-2}}}.$$ From here one can then fill out the rest of the curve for $\bar{y}(\lambda)$ using only comparative measurements of (perceived) luminous intensity, which are much easier.
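The normalization described above can be sketched in code. This is a minimal sketch: the Gaussian shape below is a crude stand-in for the real, standardized CIE photopic luminosity function (the peak width and functional form are invented for illustration); only the relation $I_\mathrm{l.i.}=\bar{y}(\lambda)I_\mathrm{r.i.}$ and the normalization $\bar{y}(555\:\mathrm{nm})=1\:\mathrm{cd}/(\mathrm{W\:sr^{-1}m^{-2}})$ come from the text.

```python
import math

def y_bar(wavelength_nm, peak_nm=555.0, width_nm=45.0):
    """Toy luminosity function in cd / (W sr^-1 m^-2), normalized to 1
    at its 555 nm peak. Not the standardized CIE curve."""
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def luminous_intensity(radiant_intensity, wavelength_nm):
    """I_l.i. = y_bar(lambda) * I_r.i., as in the formula above."""
    return y_bar(wavelength_nm) * radiant_intensity

# A green source looks brighter than a red one of equal radiant intensity:
print(luminous_intensity(1.0, 555))  # 1.0 (cd)
print(luminous_intensity(1.0, 650) < luminous_intensity(1.0, 555))  # True
```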

When measuring the "brightness" of a light source, there is a huge number of different quantities of interest, each with its own unit but a very similar name to the others, depending on exact details like whether you're integrating over angle, or surface, or wavelength, or any combination thereof. It is very easy, as a physicist, to simply give up and reckon that "luminous intensity" is just one more entry on the list. Similarly, it's easy to slide over terms like photometry and not realize that it's very different from radiometry.

That just means, though, that we need to up our game a bit and realize that there's an extra dimension at play here - the subjective sensation of brightness as perceived by the human eye as a measuring device - that we need to include on an equal footing with our clocks, meter sticks and (soon to be) watt balances, if we really want to produce measurements which are useful in a world inhabited by humans.

## Best Answer

## Because it would have been incredibly expensive.

The current definition of the meter, based on a fixed value of the speed of light, was adopted in 1983, and it replaced the 1960 definition, which was based on the wavelength of a krypton emission line. In essence, light-based precision length metrology had become so accurate that the main source of uncertainty in length measurements was the uncertainty in the speed of light. That is, the speed of light was **already** determined to be $299\,792\,458\:\rm m/s$, with an uncertainty in the $\pm 1\:\rm m/s$ range and with a large established body of measurements that used it to that precision.

Now, if you're making the change to a fixed speed of light, it is indeed tempting to change the number from $299\,792\,458$ to a nice round $300\,000\,000$, since after all they're very close - the ratio $$ \frac{300\,000\,000}{299\,792\,458} \approx 1.00069 $$ is pretty close to unity. So, why didn't we? In short, because that ratio is **not** close to unity at all.

The two definitions differ by $7$ parts in $10^{4}$ (just short of 0.1%), and that means that **every length measurement involving more than three significant figures** would have had to be re-calibrated, in pure science as well as in industry. This would have required an enormous effort to re-write a huge fraction of the scientific and engineering literature (including technical manuals and software implementations), as well as actual physical changes to hardware to bring measurements back to round numbers (i.e. if you manufactured $5\:\rm mm$-long bolts to a $10^{-3}$ relative tolerance, you would need to change your standards or maintain off-by-0.1% non-round-number lengths for your parts).

The change would also have similarly affected all measurements of quantities with a nonzero length dimension (like force, energy, pressure, and all of electrical metrology) that involve more than three significant figures. All told, you're talking about a significant fraction (more than half?) of all measurements.
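The size of the mismatch, and its effect on the bolt example above, can be computed directly:

```python
# The ratio between the "round" speed of light and the real one, and
# the shift it would impose on a 5 mm part held to a 1e-3 relative
# tolerance (the bolt example above).
ratio = 300_000_000 / 299_792_458
print(f"{ratio:.6f}")  # 1.000692 -> about 7 parts in 10^4

bolt_mm = 5.0
shift_mm = bolt_mm * (ratio - 1)
print(f"{shift_mm * 1000:.2f} um")  # 3.46 um, most of the 5 um tolerance band
```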

The role of metrology is to provide a common currency in measurements that can be used for science, engineering, industry and commerce, and to be as transparent as possible. Changing standards is extremely expensive (just ask the countries that switched from imperial to metric, or the ones that haven't because it's too onerous) and the gains need to be clearly worthwhile. Rounding out the speed of light does not come anywhere close to meeting that standard.