The question: "I'm wondering if there is some good reason why the universe as we know it has to have twelve particles rather than just four."
The short answer: Our current standard description of the spin-1/2 property of the elementary particles is incomplete. A more complete theory would require that these particles come in three generations.
The medium answer: The spin-1/2 of the elementary fermions is an emergent property. The more fundamental spin acts like position: the Heisenberg uncertainty principle (HUP) applies to consecutive measurements of it in the same way it applies to consecutive position measurements. This fundamental spin is invisible to us because it is renormalized away; what's left is three generations of the particle, each with the usual spin-1/2.
When a particle moves through positions it does so by way of an interaction between position and momentum; these are complementary variables. The equivalent concept for spin-1/2 is "mutually unbiased bases" (MUBs). There are at most three MUBs for spin-1/2. Letting a particle's spin move among them means that the number of degrees of freedom of the particle has tripled. So when you find the long time propagators over that Hopf algebra, you end up with three times the usual number of particles. Hence there are three generations.
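To make that MUB count concrete, here is a quick numerical check (my own illustration, not taken from the paper): the eigenbases of the three Pauli matrices form the three mutually unbiased bases of a spin-1/2 system, and every overlap between vectors from two different bases has squared magnitude 1/2.

```python
import numpy as np

# Eigenbases of the three Pauli matrices: the three mutually
# unbiased bases (MUBs) of a single qubit (spin-1/2, d = 2).
z_basis = [np.array([1, 0]), np.array([0, 1])]                # sigma_z
x_basis = [np.array([1, 1]) / np.sqrt(2),
           np.array([1, -1]) / np.sqrt(2)]                    # sigma_x
y_basis = [np.array([1, 1j]) / np.sqrt(2),
           np.array([1, -1j]) / np.sqrt(2)]                   # sigma_y

bases = {"Z": z_basis, "X": x_basis, "Y": y_basis}

# Unbiasedness: |<a|b>|^2 = 1/d = 1/2 for every pair of vectors
# drawn from two *different* bases.
for n1, b1 in bases.items():
    for n2, b2 in bases.items():
        if n1 < n2:
            for a in b1:
                for b in b2:
                    assert np.isclose(abs(np.vdot(a, b)) ** 2, 0.5)
            print(f"{n1} and {n2} are mutually unbiased")
```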
The long answer: The two (more or less classical) things we can theoretically measure for a spin-1/2 particle are its position and its spin. If we measure its spin, the spin is then forced into an eigenstate of spin so that measuring it again gives the same result. That is, a measurement of spin causes the spin to be determined. On the other hand, if we measure its position, then by the Heisenberg uncertainty principle, we will cause an unknown change to its momentum. The change in momentum makes it impossible for us to predict the result of a subsequent position measurement.
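The spin analogue of that disturbance is easy to simulate (an illustrative toy of standard projective measurement, not a calculation from the paper): repeating a sigma_z measurement always reproduces the first outcome, but interposing a measurement in a complementary (unbiased) basis randomizes the next sigma_z result.

```python
import numpy as np

rng = np.random.default_rng(1)

def measure(state, basis):
    """Projective measurement: returns (outcome index, collapsed state)."""
    probs = np.array([abs(np.vdot(v, state)) ** 2 for v in basis])
    k = rng.choice(len(basis), p=probs / probs.sum())
    return k, basis[k]

z = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

state = np.array([0.6, 0.8])  # arbitrary normalized spin state

# Measuring sigma_z twice in a row always repeats the first outcome ...
k1, state = measure(state, z)
k2, state = measure(state, z)
assert k1 == k2

# ... but an intervening measurement in a complementary basis
# randomizes the next sigma_z outcome: 50/50 regardless of k1.
_, state = measure(state, x)
k3, _ = measure(state, z)
print(k1, k3)  # k3 agrees with k1 only half the time on average
```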
As quantum physicists, we long ago grew accustomed to this bizarre behavior. But imagine that nature is parsimonious with her underlying machinery. If so, we'd expect the fundamental (i.e. before renormalization) measurements of a spin-1/2 particle's position and spin to be similar. For such a theory to work, one must show that after renormalization, one obtains the usual spin-1/2.
A possible solution to this conundrum is given in the paper:
Carl Brannen, "Spin Path Integrals and Generations", Found. Phys. 40:1681-1699 (2010).
http://arxiv.org/abs/1006.3114
The paper is a straightforward QFT resummation calculation. It assumes a strange (to us) spin-1/2 where measurements act like the not-so-strange position measurements. It resums the propagators for the theory and finds that the strange behavior disappears over long times: the long time propagators are equivalent to the usual spin-1/2, and furthermore they appear in three generations. It also shows that the long time propagators have a form that matches the mysterious lepton mass formulas of Yoshio Koide.
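For reference, Koide's charged-lepton relation states that (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2 = 2/3. A quick numerical check (the specific PDG mass values below are my insertion, not the paper's):

```python
import math

# Approximate PDG charged-lepton masses in MeV/c^2
m_e, m_mu, m_tau = 0.5109989461, 105.6583745, 1776.86

# Koide's relation: the ratio below should equal 2/3
koide = (m_e + m_mu + m_tau) / (
    math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2
print(f"Koide ratio = {koide:.6f}  (2/3 = {2/3:.6f})")
# The printed ratio agrees with 2/3 to roughly one part in 10^5.
```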
Peer review: The paper went through an arduous peer-review process with three reviewers. As with any journal article it had a managing editor and a chief editor. Complaints about the physics have already been made by competent physicists who took the trouble of carefully reading the paper; it's unlikely that someone making a quick read of it will find something that hasn't already been argued through. The paper was selected by the chief editor of Found. Phys. as suitable for publication in that journal and was published last year.
The chief editor of Found. Phys. is now Gerard 't Hooft. His attitude toward publishing junk is quite clear; he writes:
How to become a bad theoretical physicist

"On your way towards becoming a bad theoretician, take your own immature theory, stop checking it for mistakes, don't listen to colleagues who do spot weaknesses, and start admiring your own infallible intelligence. Try to overshout all your critics, and have your work published anyway. If the well-established science media refuse to publish your work, start your own publishing company and edit your own books. If you are really clever you can find yourself a formerly professional physics journal where the chief editor is asleep."
http://www.phys.uu.nl/~thooft/theoristbad.html
One hopes that 't Hooft wasn't asleep when he allowed this paper to be published.
Extensions: My next paper on the subject extends the above calculation to obtain the weak hypercharge and weak isospin quantum numbers. It uses methods similar to the above, that is, the calculation of long time propagators, but with a more sophisticated method of manipulating the Feynman diagrams called "Hopf algebra" or "quantum algebra". I'm figuring on sending it to the same journal. It's close to finished; I basically need to reread it over and over and add references:
http://brannenworks.com/E8/HopfWeakQNs.pdf
Annihilation is defined as the collision of a particle and its antiparticle resulting in the destruction of both. The conversion products do not have to be photons, but usually are, because the probability of creating a given product falls as its mass rises, and photons are massless. In colliders like the LHC, this can be compensated for by smashing matter together at higher and higher energies, in the hope that the matter-antimatter pairs coming out of the remnants have enough kinetic energy to then create particles more interesting than photons.
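A minimal sketch of the kinematics behind that statement (my own illustration): to create a particle-antiparticle pair of rest mass m at all, the available center-of-mass energy must be at least 2mc^2, so heavier products demand ever more energetic collisions. The mass values below are approximate PDG figures.

```python
# Threshold: E_cm >= 2 m c^2 to produce a particle-antiparticle
# pair of rest mass m (all of E_cm can go into rest mass).
masses_mev = {            # approximate rest masses, MeV/c^2
    "electron": 0.511,
    "muon":     105.66,
    "tau":      1776.86,
    "top":      172_760.0,
}

for name, m in masses_mev.items():
    print(f"{name:>8s} pair threshold: E_cm >= {2 * m:,.2f} MeV")
```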
In two-photon interactions, the photon coupling produces a fermion-antifermion pair, such as the electron-positron pairs exploited in Positron Emission Tomography (PET). The resulting annihilation is thus not a direct result of the photon coupling but a distinct event.
Electron-positron annihilations can give mu and tau neutrinos as well as electron neutrinos. For a calculation of the probabilities see, for example, "Mu and Tau Neutrino Thermalization and Production in Supernovae: Processes and Timescales".
You might also be interested to read DavidZ's answer to "Why does electron-positron annihilation prefer to emit photons?".