[Physics] How do we know that radioactive decay rates are constant over billions of years?

half-life, radioactivity, statistics

A friend and I recently discussed the idea that radioactive decay rates are constant over geological times, something upon which dating methods are based.

A large number of experiments seem to have shown that decay rates are largely uninfluenced by the environment (temperature, solar activity, etc.). But how do we know that decay rates have been constant over billions of years? What if some property of the universe has remained the same over the hundred or so years since radioactivity was discovered and measured, but was different one billion years ago?

An unsourced statement on the Wikipedia page on radioactive decay reads:

[A]strophysical observations of the luminosity decays of distant supernovae (which occurred far away so the light has taken a great deal of time to reach us) strongly indicate that unperturbed decay rates have been constant.

Is this true?

I'm interested in verifying the constancy of decay rates over very long periods of time (millions and billions of years). Specifically, I'm not interested in radiocarbon dating or other methods for dating things in the thousands-of-years range. Radiocarbon dates, used for dating organic material younger than about 50,000 years, are calibrated and cross-checked against non-radioactive data such as the rings of very old trees and similarly countable yearly deposits in marine varves, a method of verification that I find convincing and am not challenging here.

Best Answer

Not an answer to your exact question, but so closely related that I think it deserves to be mentioned: the Oklo natural nuclear reactor, discovered in 1972 in Gabon, Central Africa. Self-sustaining nuclear fission reactions took place there 1.8 billion years ago. Physicists quickly understood how they could use it as a very precise probe of neutron capture cross sections that far back in time. A re-analysis of the data [1], featuring one of the authors of the original 1970s papers, was published in 2006. The idea is that neutron capture is greatly enhanced when the neutron energy is close to a resonance of the capturing nucleus, so even a slight shift of those resonance energies would have produced a dramatically different outcome (a different mix of isotopes among the reaction products). The conclusion of the paper is that those resonance energies have not shifted by more than 0.1 eV.
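To get a feel for why the resonance positions are such a sensitive probe, here is a minimal numerical sketch (not the paper's actual neutronics calculation). It uses a simplified single-level Breit-Wigner capture cross section with illustrative parameters loosely inspired by the low-lying neutron-capture resonance of $^{149}$Sm that dominates the Oklo analysis, and shows how strongly the thermally averaged capture rate reacts to small shifts of the resonance energy. All numerical values (resonance energy, width, neutron temperature) are assumptions for illustration, not evaluated nuclear data.

```python
import numpy as np

# Illustrative parameters (assumed for this sketch, not evaluated nuclear data),
# loosely inspired by the low-lying neutron-capture resonance of 149Sm.
E_R = 0.10        # resonance energy in eV
GAMMA = 0.06      # total resonance width in eV
KT = 0.025        # Maxwellian temperature of the neutron gas in eV

def capture_cross_section(E, E_r):
    """Simplified single-level Breit-Wigner capture cross section.

    The sqrt(E_r/E) factor is the usual 1/v behaviour; the absolute scale is
    arbitrary because only the *relative* change when E_r shifts matters here.
    """
    return np.sqrt(E_r / E) * (GAMMA / 2) ** 2 / ((E - E_r) ** 2 + (GAMMA / 2) ** 2)

def thermal_capture_rate(E_r, kT=KT):
    """Capture rate averaged over a Maxwellian flux spectrum ~ E * exp(-E/kT)."""
    E = np.linspace(1e-4, 2.0, 20_000)          # neutron energies in eV
    flux = E * np.exp(-E / kT)
    return np.trapz(capture_cross_section(E, E_r) * flux, E) / np.trapz(flux, E)

base = thermal_capture_rate(E_R)
for shift in (-0.05, -0.02, +0.02, +0.05):      # resonance shifts in eV
    changed = thermal_capture_rate(E_R + shift)
    print(f"shift of {shift:+.2f} eV -> thermally averaged capture rate changes by "
          f"{100 * (changed / base - 1):+.0f}%")
```

Even a shift of a few hundredths of an eV changes the averaged capture rate at the tens-of-percent level in this toy model, which is why the observed isotopic ratios at Oklo pin down the resonance positions, and hence any drift of the underlying physics, so tightly.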

It should be noted that the most interesting outcome from the point of view of theoretical physics is that this potential shift can be related to a potential change of the fine-structure constant $\alpha$. The paper concludes that

$$-5.6 \times 10^{-8} < \frac{\delta\alpha}{\alpha} < 6.6 \times 10^{-8}$$
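As a rough order-of-magnitude check (not a calculation from the paper): resonance energies in a heavy nucleus arise from a near-cancellation of strong and Coulomb contributions, and the Coulomb part scales with $\alpha$ at the MeV level. Assuming a sensitivity $|\partial E_r/\partial \ln\alpha|$ of order 1 MeV, which is a typical figure from the broader Oklo literature and an assumption here rather than a number quoted above, the 0.1 eV bound on the resonance shift translates into

$$\left|\frac{\delta\alpha}{\alpha}\right| \lesssim \frac{|\Delta E_r|}{|\partial E_r/\partial \ln\alpha|} \sim \frac{0.1\ \text{eV}}{1\ \text{MeV}} = 10^{-7},$$

the same ballpark as the bound quoted from the paper.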

[1] Yu. V. Petrov, A. I. Nazarov, M. S. Onegin, V. Yu. Petrov, and E. G. Sakhnovsky, "Natural nuclear reactor at Oklo and variation of fundamental constants: Computation of neutronics of a fresh core", Phys. Rev. C 74, 064610 (2006). https://journals.aps.org/prc/abstract/10.1103/PhysRevC.74.064610
