I know the spent fuel is still radioactive. But it has to be more stable than what was put in and thus safer than the uranium that we started with. That is to say, is storage of the waste such a big deal? If I mine the uranium, use it, and then bury the waste back in the mine (or any other hole) should I encounter any problems? Am I not doing the inhabitants of that area a favor as they will have less radiation to deal with than before?
Nuclear Physics – Why is Nuclear Waste More Dangerous Than the Original Nuclear Fuel?
nuclear-engineering nuclear-physics radiation
Related Solutions
Mart's answer gets to some of the problem (e.g. ensuring that storage remains stable for the period of decay) but I believe the other answers here are a bit off base.
Nuclear fuel is deemed "spent" when the nuclear engineers determine it is no longer economically worthwhile to continue using it. In a physical sense, the fuel could certainly be used for much longer. Moreover, how long the fuel is used, and what the composition of the spent fuel is, both depend on the kind of reactor in question. The plutonium that is such a dangerous component of most used fuel is actually a major contributor to the energy output of a CANDU (Canadian heavy-water) reactor toward the end of the fuel's useful life.
As for the composition of the waste, the majority is rather inert U-238 (~91%) that did not transmute through neutron capture or fission. This material is part of the original fuel and no more harmful than when it went in. A small percentage (~1%) is the remaining U-235 that did not transmute or fission. About the same amount is plutonium, produced by neutron capture on U-238 and subsequent decay. Depending on reactor operation, about 4% is fission products, and the rest is actinides and activation products.
The half-lives of these isotopes vary significantly, but it is a convenient fact that the more radioactive a material is, the shorter its half-life and, consequently, the sooner it becomes "safe." There are many graphs of decay times for spent fuel out there (e.g. see second graph), but as I said above, the exact time for decay depends on a lot of things: the original composition of the fuel, what kind of reactor was used, and the final processing of the used fuel.
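The "shorter half-life means sooner safe" trade-off can be sketched numerically. This is a minimal illustration, not part of the original answer; the half-lives are standard textbook values and the decay law is the usual exponential one:

```python
def remaining_fraction(t_years, half_life_years):
    """Fraction of a radionuclide left after t_years of decay."""
    return 0.5 ** (t_years / half_life_years)

# Cs-137 (half-life ~30 yr): after 300 years, ten half-lives have elapsed,
# so only about 0.1% of the original amount remains.
print(remaining_fraction(300, 30))

# U-238 (half-life ~4.5 Gyr) barely decays at all on the same timescale.
print(remaining_fraction(300, 4.5e9))
```

The hot, short-lived isotopes dominate the early danger but burn themselves out; the long-lived ones linger but decay slowly.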
Several options for processing spent fuel exist, including recycling it to retrieve the usable uranium and plutonium. Doing so reduces the waste volume considerably but also necessitates developing separation technologies and handling concentrated waste. It is also possible to place the waste in special reactors dedicated to "burning" it with a high neutron flux; even so, there will always be some waste. Waste slated for disposal is often vitrified, that is, mixed into borosilicate glass. These glass logs are put into steel containers and then stored in concrete.
Whatever is done with the waste, we must be confident in the stability of the storage for at least several hundred years (thousands of years in some cases). A great deal of research continues on this subject. That said, the absolute amount of waste is quite small. I've read various numbers, but for an order of magnitude we are talking about one football field, 20 feet deep, of waste for all of the nuclear reactors in the United States over the last 60 years. That is a lot of bad stuff, but by comparison it seems quite manageable. If we recycled the fuel, that would shrink to about 6 inches of waste spread over one football field. Coal plants in general produce on the order of 10,000 times more waste by volume, and that waste contains more radioactive material in absolute terms than nuclear plant waste does.
The relevant physics is this:
Th-232 (non-fissile) can be bred into fissile U-233. Because of side reactions, some U-232 always appears in the creation of U-233. For those wishing to make nuclear bombs, two problems emerge:
1) U-233 is very tricky to detonate.
2) U-232 rapidly decays down to Tl-208, a strong gamma emitter.
Strong gamma rays are not just deadly to the engineers, making the weapon difficult to fashion; they are also destructive to the electronics and semi-stable materials one might put into a bomb. Moreover, the gammas' precise energy and penetrating power announce and identify their source from a distance.
So why are bomb-making considerations relevant to nuclear energy? You asked why governments do not research thorium more. Historically, governments have invested more time and money seeking bombs than energy. With better alternatives available, nobody wants to develop unreliable bombs with a shelf life measured in weeks. Consequently, thorium research was cut short during the Cold War. It is only recently receiving renewed interest for commercial energy production, but it now must overcome decades of accumulated bureaucratic inertia and an entrenched uranium industry, topics well beyond the scope of this forum.
Best Answer
Typical nuclear power reactors begin with a mixture of uranium-235 (fissile, with a half-life of 700 Myr) and uranium-238 (more common, not fissile, half-life 4 Gyr) and operate until some modest fraction, 1%-5%, of the fuel has been expended. There are two classes of nuclides produced in the fission reactions:
Fission products, which tend to have 30-60 protons in each nucleus. These include emitters like strontium-90 (half-life about 30 years), iodine-131 (about 8 days), and cesium-137 (also about 30 years). These are the main things you hear about in fallout, when waste is somehow released into the atmosphere.
For instance, after the Chernobyl disaster, radioactive iodine-131 from the fallout was concentrated in people's thyroid glands by the same mechanisms that concentrate natural iodine, leading to acute, localized radiation doses in that organ. Strontium behaves chemically very much like calcium, and there was a period after Chernobyl when milk from dairies in Eastern Europe was discarded due to its high strontium content. (Some Norwegian reindeer are still inedible.)
Activation products. The reactors operate by producing lots of free neutrons, which typically are captured on some nearby nucleus before they decay. For most elements, if the nucleus with $N$ neutrons is stable, the nucleus with $N+1$ neutrons is radioactive and will decay after some (possibly long) time. For instance, neutron capture on natural cobalt-59 in steel alloys produces cobalt-60 (half-life of about five years); Co-60 is also produced from multiple neutron captures on iron.
In particular, a series of neutron captures and beta decays, starting from uranium, can produce plutonium-239 (half-life 24 kyr) and plutonium-240 (6 kyr).
What sometimes causes confusion is the role played by the half-life in determining the decay rate. If I have $N$ radionuclides, and the average time before an individual nuclide decays is $T$, then the "activity" of my sample is $$ \text{activity, } A= \frac NT. $$
So suppose for the sake of argument that I took some number $N_\mathrm{U}$ of U-238 atoms and fissioned them into $2N_\mathrm{U}$ atoms of cobalt-60. I've changed my population size by a factor of two, but I've changed the decay rate by a factor of a billion.
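That thought experiment can be checked in a few lines; the half-lives used here (U-238 ≈ 4.5 Gyr, Co-60 ≈ 5.27 yr) are standard values rather than numbers quoted in the answer:

```python
import math

HALF_LIFE_U238_YR = 4.5e9   # uranium-238
HALF_LIFE_CO60_YR = 5.27    # cobalt-60

def activity(n_atoms, half_life_yr):
    """Decays per year: A = N / tau, with mean lifetime tau = T_half / ln 2."""
    return n_atoms * math.log(2) / half_life_yr

n_u = 1.0e24  # arbitrary starting population of U-238 atoms
ratio = activity(2 * n_u, HALF_LIFE_CO60_YR) / activity(n_u, HALF_LIFE_U238_YR)
print(f"activity increases by a factor of {ratio:.1e}")  # ~1.7e9, about a billion
```

The factor of two from doubling the population is swamped by the roughly billion-fold ratio of the half-lives.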
The ratio of the half-lives $T_\text{U-238} / T_\text{Pu-240}$ is roughly a factor of a million. So if a typical fuel cycle turns 0.1% of the initial U-238 into Pu-240, the fuel leaves the reactor roughly a thousand times more radioactive than it went in --- and will remain so for thousands of years.
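Putting the last paragraph's numbers together (the half-lives are standard values; the 0.1% conversion fraction is the answer's own assumption):

```python
HALF_LIFE_U238_YR = 4.5e9    # uranium-238
HALF_LIFE_PU240_YR = 6.56e3  # plutonium-240

half_life_ratio = HALF_LIFE_U238_YR / HALF_LIFE_PU240_YR
print(f"half-life ratio: {half_life_ratio:.1e}")  # ~7e5, roughly a million

converted = 1e-3  # 0.1% of the initial U-238 bred into Pu-240
activity_ratio = converted * half_life_ratio
print(f"Pu-240 activity vs. original U-238: ~{activity_ratio:.0f}x")  # order a thousand
```

A tiny conversion fraction times a huge half-life ratio is exactly why spent fuel leaves the reactor far more radioactive than fresh fuel entered it.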