[Physics] Why is matter-antimatter asymmetry surprising, if asymmetry can be generated by a random walk in which particles go into black holes

antimatter · baryogenesis · black-holes · hawking-radiation

My understanding is that the early universe was a very "hot" (i.e. energy-dense) environment. It was even hot enough for black holes to form from photons.

My second point of understanding is that black holes can lose mass due to Hawking radiation, which amounts to:

Physical insight into the process may be gained by imagining that
particle–antiparticle radiation is emitted from just beyond the event
horizon. This radiation does not come directly from the black hole
itself, but rather is a result of virtual particles being "boosted" by
the black hole's gravitation into becoming real particles. As the
particle–antiparticle pair was produced by the black hole's
gravitational energy, the escape of one of the particles lowers the
mass of the black hole.

An alternative view of the process is that vacuum fluctuations cause a
particle–antiparticle pair to appear close to the event horizon of a
black hole. One of the pair falls into the black hole while the other
escapes. In order to preserve total energy, the particle that fell
into the black hole must have had a negative energy (with respect to
an observer far away from the black hole). This causes the black hole
to lose mass, and, to an outside observer, it would appear that the
black hole has just emitted a particle. In another model, the process
is a quantum tunnelling effect, whereby particle–antiparticle pairs
will form from the vacuum, and one will tunnel outside the event
horizon.

So I simulated a scenario with two types of particles that are created in a 50/50 ratio by Hawking radiation and always annihilate each other when possible.

Edit:

In this simulation both particles are created, but one gets sucked
into the black hole. The other stays outside. So the charge should be
conserved.

The simulation (written in R) is here:

# Run the simulation for 1 million steps and initialize output matrix
n_steps = 1e6
res     = matrix(ncol = 2, nrow = n_steps)

# Initialize particle counts to zero
n0 = n1 = 0
for(i in 1:n_steps){
  # Generate a new particle with 50/50 chance of matter/antimatter
  x = sample(0:1, 1)

  # If "x" is a matter particle then...
  if(x == 0){
    # If an antimatter particle exists, then annihilate it with the new matter particle. 
    # Otherwise increase the number of matter particles by one
    if(n1 > 0){
      n1 = n1 - 1
    }else{
      n0 = n0 + 1
    }
  }

  # If "x" is an antimatter particle then...
  if(x == 1){
    # If a matter particle exists, then annihilate it with the new antimatter particle. 
    # Otherwise increase the number of antimatter particles by one
    if(n0 > 0){
      n0 = n0 - 1
    }else{
      n1 = n1 + 1
    }
  }

  # Save the results and plot them if "i" is a multiple of 1000
  res[i, ] = c(n0, n1)
  if(i %% 1000 == 0){
    plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
    lines(res[1:i, 2], col = "Red", lwd = 3)
  }
}

Here is a snapshot of the results, where the black line is the number of "type 0" particles and the red line is the number of "type 1" particles:
[plot omitted]

Obviously this is a simplified 1D model where any generated antimatter is immediately annihilated by a corresponding particle of matter, and so on. However, I do not see why the qualitative result of a dominant particle "species" would not be expected to hold in general. So what is the basis for expecting equal amounts of matter and antimatter? How is it in conflict with this simple simulation?

EDIT:

As requested in the comments, I modified the simulation to allow different initial numbers of particles and a different probability of generating each particle type.

# Run the simulation for 250k steps and initialize output matrix
n_steps = 250e3
res     = matrix(ncol = 2, nrow = n_steps)

# Initial number of each type of particle and probability of generating type 0
n0 = 0
n1 = 0
p0 = 0.51
for(i in 1:n_steps){
  # Generate a new particle: matter (type 0) with probability p0, antimatter otherwise
  x = sample(0:1, 1, prob = c(p0, 1 - p0))

  # If "x" is a matter particle then...
  if(x == 0){
    # If an antimatter particle exists, then annihilate it with the new matter particle. 
    # Otherwise increase the number of matter particles by one
    if(n1 > 0){
      n1 = n1 - 1
    }else{
      n0 = n0 + 1
    }
  }

  # If "x" is an antimatter particle then...
  if(x == 1){
    # If a matter particle exists, then annihilate it with the new antimatter particle. 
    # Otherwise increase the number of antimatter particles by one
    if(n0 > 0){
      n0 = n0 - 1
    }else{
      n1 = n1 + 1
    }
  }

  # Save the results and plot them if "i" is a multiple of 10,000
  res[i, ] = c(n0, n1)
  if(i %% 1e4 == 0){
    plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
    lines(res[1:i, 2], col = "Red", lwd = 3)
  }
}

Some examples:

n0 = 1000, n1 = 0, p = 0.5
[plot omitted]

n0 = 0, n1 = 0, p = 0.51
[plot omitted]

n0 = 1000, n1 = 1000, p = 0.5
[plot omitted]

EDIT 2:

Thanks all for your answers and comments. I learned that the process of generating matter from black holes is called "black hole baryogenesis". However, the papers I checked on this topic (e.g. Nagatani 1998, Majumdar et al. 1994) do not seem to be talking about the same thing I am.

I am saying that, via the dynamics of symmetric generation and annihilation of matter and antimatter, together with symmetric baryogenesis via Hawking radiation, you will always get an imbalance over time that tends to grow due to a positive feedback. I.e., the Sakharov conditions such as CP violation are not actually required to get an asymmetry.

If you accept that pair production, annihilation, and Hawking radiation exist, then you should by default expect one species of particle to dominate over the other at all times. That is the only stable state (besides an energy-only universe). Approximately equal matter/antimatter is quite obviously unstable, because the two annihilate each other, so it makes no sense to expect that.
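To make this concrete: in the first simulation the signed difference d = n0 - n1 changes by exactly +1 or -1 each step with equal probability (creating matter or annihilating antimatter gives +1, the reverse gives -1), so d is an unbiased random walk and the surviving imbalance |d| typically grows like the square root of the number of steps. A minimal sketch of that reformulation (not a new model, just the same rules rewritten in terms of the difference):

# Sketch: the first simulation rewritten in terms of d = n0 - n1.
# Each step d moves +1 or -1 with equal probability, so |d| is the
# magnitude of an unbiased random walk and grows roughly like sqrt(steps).
set.seed(1)
n_steps = 1e5
n_rep   = 200

final_imbalance = replicate(n_rep, abs(sum(sample(c(-1, 1), n_steps, replace = TRUE))))

mean(final_imbalance)    # typical surviving imbalance after n_steps
sqrt(2 * n_steps / pi)   # random-walk prediction for E|d|, about 0.8 * sqrt(n_steps)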

It is possible that in some more complicated model (including more than one type of particle pair, distances between particles, forces, etc.) this tendency towards asymmetry would somehow be canceled out. But I cannot think of any reason why that would be; it should be up to the people who expect matter-antimatter symmetry to come up with a mechanism explaining it (which would be an odd thing to spend your time on, since that is decidedly not what we observe in our universe).

Regarding some specific issues people had:

1) Concerns about negative charge accumulating in the black holes and positive charge accumulating in the regular space

  • While in the simulation there is only one kind of particle pair, in practice this would be happening in parallel for electron-positron and proton-antiproton pairs at (as far as I know) equal rates. So I would not expect any kind of charge imbalance. You can imagine that the particle pairs in the simulation are half electron-positron pairs and half proton-antiproton pairs.

2) There were not enough black holes in the early universe to explain the asymmetry

  • I tried and failed to find an exact quote for this so I could figure out what assumptions were made, but I doubt they included the positive feedback shown by the simulation in their analysis. Also, I wondered whether they considered the possibility of kugelblitz black holes forming in an energy-only universe. Finally, the tendency towards a dominant species is ongoing all the time; it need not have happened in the early universe anyway.

3) If this process is ongoing in a universe that looks like ours today (where it may take a long time for a particle to travel from one black hole to another), we would expect some black holes to happen to generate locally antimatter-dominated regions and others to generate matter-dominated regions. Eventually some of these regions should come into contact with each other, leading to an observable mass annihilation of particles.

  • I agree this would be the default expectation, but if you start from a highly matter-dominated state it would be very unlikely for enough antimatter to be generated to locally annihilate all the matter, and even then there is only a 50% chance that the next phase is antimatter-dominated. Putting numbers on this would require a more complex model that I don't wish to attempt here.

4) Asymmetry is not actually considered surprising by physicists.

  • Well, it says this on Wikipedia:

    Neither the standard model of particle physics, nor the theory of
    general relativity provides a known explanation for why this should be
    so, and it is a natural assumption that the universe be neutral with
    all conserved charges. […] As remarked in a 2012 research paper,
    "The origin of matter remains one of the great mysteries in physics."

5) This process is somehow an exotic "alternative" theory to the standard.

  • This process was deduced by accepting standard physics/cosmology to be correct. It is a straightforward consequence of the interplay between pair production/annihilation and Hawking radiation. It may seem counterintuitive to people used to thinking about what we would expect on average from a model, when actually we want to think about how the individual instances behave. If the simulation is run multiple times and all the "particles" are added up, the result will be ~50/50 matter/antimatter. However, we observe one particular universe, not an average of all possible universes. In each particular instance there is always a dominating species of particle, which we end up calling "matter" (see the short sketch below).
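The sketch: using the same random-walk reformulation as above, the imbalance averaged over many runs is near zero even though the typical imbalance in any single run is large.

# Sketch: across many runs the signed imbalance d = n0 - n1 averages to ~0
# ("50/50 on average"), yet the typical |d| in any single run is large.
set.seed(2)
n_steps = 1e5
n_rep   = 200
d_final = replicate(n_rep, sum(sample(c(-1, 1), n_steps, replace = TRUE)))

mean(d_final)        # ensemble average: close to zero
mean(abs(d_final))   # typical single-run imbalance: of order sqrt(n_steps)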

So, after reading the answers/comments I think the answer to my question is probably that physicists were thinking of what they would expect on average when they should have been thinking about what would happen in specific instances. But I'm not familiar enough with the literature to say.

Edit 3:

After talking with Chris in the chat I decided to make the rate of annihilation dependent on the number of particles in the universe. I did this by setting the probability of annihilation to exp(-100/n_part), where n_part is the number of particles. This was pretty arbitrary; I chose it to have decent coverage over the typical range reached in 250k steps. It looks like this:
[plot omitted: exp(-100/n_part) as a function of n_part]
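(Since the plot is not shown, the curve can be regenerated with a couple of lines; this just re-plots the stated function and is not part of the simulation itself. The 1:5000 range is only for illustration.)

# Re-plot the annihilation probability used in the code below: exp(-100 / n_part)
n_part = 1:5000
plot(n_part, exp(-100/n_part), type = "l", lwd = 3,
     xlab = "number of particles", ylab = "annihilation probability",
     panel.first = grid())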

Here is the code (I also added some parallelization, sorry for the increased complexity):

require(doParallel)

# Number of simulations to run and threads to use in parallel
n_sim   = 100
n_cores = 30

# Initial number of each type of particle and probability
n0 = 0
n1 = 0
p0 = 0.5

registerDoParallel(cores = n_cores)
out = foreach(sim = 1:n_sim) %dopar% {
  # Run the simulation for 250k steps and initialize output matrix
  n_steps = 250e3
  res     = matrix(ncol = 2, nrow = n_steps)

  for(i in 1:n_steps){
    # Generate a new particle with 50/50 chance of matter/antimatter
    x = sample(0:1, 1, prob = c(p0, 1 - p0))

    # Particle count after adding the new particle, and the resulting
    # particle-number-dependent annihilation probability
    n_part = sum(res[i - 1, ]) + 1
    p_ann  = exp(-100/n_part)
    flag   = sample(0:1, 1, prob = c(1 - p_ann, p_ann))

    # If "x" is a matter particle then...
    if(x == 0){
      # If an antimatter particle exists, then annihilate it with the new matter particle. 
      # Otherwise increase the number of matter particles by one
      if(n1 > 0 & flag){
        n1 = n1 - 1
      }else{
        n0 = n0 + 1
      }
    }

    # If "x" is an antimatter particle then...
    if(x == 1){
      # If a matter particle exists, then annihilate it with the new antimatter particle. 
      # Otherwise increase the number of antimatter particles by one
      if(n0 > 0 & flag){
        n0 = n0 - 1
      }else{
        n1 = n1 + 1
      }
    }

    # Save the results and print progress every 10,000 steps (for one simulation per core)
    res[i, ] = c(n0, n1)
    if(i %% 1e4 == 0 && sim %in% seq(1, n_sim, by = n_cores)){
      # plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
      # lines(res[1:i, 2], col = "Red", lwd = 3)
      print(paste0(sim, ": ", i))
    }
  }
  return(res)
}

Here is an example of 25 results:
[plots omitted]

And a histogram of the percent of particles that were in the minor class by the end of each simulation:
[histogram omitted]

So the results still agree with the simpler model in that such systems will tend to have a dominant species of particle.

Edit 4:

After a further helpful conversation with Chris, he suggested that annihilation of more than one particle pair per step was the crucial added factor. Specifically, the number of removed pairs should be a sample from a Poisson distribution with mean proportional to the product of the particle counts, i.e. rpois(1, m*n0*n1), where m is small enough that annihilations are very rare until a large number of matter and antimatter particles exist.

Here is the code (which is quite different from earlier):

require(doParallel)

# Number of simulations to run and threads to use in parallel
n_sim   = 100
n_cores = 30

# Initial number of each type of particle, generation probability, and annihilation rate constant
n0 = 0
n1 = 0
p0 = 0.5
m  = 10^-4

# Number of steps per simulation
n_steps = 250e3

registerDoParallel(cores = n_cores)
out = foreach(sim = 1:n_sim) %dopar% {
  # Initialize output matrix
  res = matrix(ncol = 3, nrow = n_steps)

  for(i in 1:n_steps){
    # Generate a new particle with 50/50 chance of matter/antimatter
    x = sample(0:1, 1, prob = c(p0, 1 - p0))

    # If "x" is a matter particle then...
    if(x == 0){
      n0 = n0 + 1
    }

    # If "x" is an antimatter particle then...
    if(x == 1){
      n1 = n1 + 1
    }

    # Annihilate a Poisson-distributed number of particle pairs with mean m*n0*n1
    n_del = rpois(1, m*n0*n1)
    n0 = max(0, n0 - n_del)
    n1 = max(0, n1 - n_del)

    # Save the results and print progress every 10,000 steps (for one simulation per core)
    res[i, 1:2] = c(n0, n1)
    res[i, 3]   = min(res[i, 1:2])/sum(res[i, 1:2])
    if(i %% 1e4 == 0 && sim %in% seq(1, n_sim, by = n_cores)){
      # plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
      # lines(res[1:i, 2], col = "Red", lwd = 3)
      print(paste0(sim, ": ", i))
    }
  }
  return(res)
}

And here are the results for various values of "m" (which controls how often annihilation occurs). This plot shows, for each step, the average proportion of particles in the minority species (using 100 simulations per value of m) as the blue line; the green line is the median, and the bands are +/- 1 sd from the mean:

[plots omitted]

The first panel shows the same behaviour as my earlier simulations, and you can see that as m gets smaller (annihilation becomes rarer for a given number of particles) the system tends to stay in a nearly symmetric state (50/50 matter/antimatter) for longer.

So a key assumption made by physicists seems to be that the annihilation rate in the early universe was low enough for particles to accumulate until both species were common enough that neither is ever likely to get totally "wiped out".

EDIT 5:

I ran one of those Poisson simulations for 8 million steps with m = 10^-6 and you can see that it just takes longer for the dominance to play out (it looks slightly different because the 1 sigma fill wouldn't plot with so many data points):
[plot omitted]

So from that I conclude that very low annihilation rates just delay how long the process takes, rather than resulting in a fundamentally different outcome.

Edit 6:

The same thing happens with m = 10^-7 and 28 million steps. The aggregate chart looks the same as the m = 10^-6 case with 8 million steps, so here are some individual examples instead. You can see a clear trend towards a dominating species, just as in the original model:
[plots omitted]

Edit 7:

To wrap this up… I think the answer to the question ("why do physicists think this?") is clear from my conversation with Chris here. Chris does not seem interested in turning that into an answer, but I will accept it if someone writes something similar.

Best Answer

Congratulations on finding a method for baryogenesis that works! Indeed, it's true that if you have a bunch of black holes, then by random chance you'll get an imbalance. And this imbalance will remain even after the black holes evaporate, because the result of the evaporation doesn't depend on the overall baryon number that went into the black hole.

Black holes can break conservation laws like that. The only conservation laws they can't break are the ones where you can measure the conserved quantity from outside. For example, charge is still conserved because you can keep track of the charge of the black hole by measuring its electric field. In the Standard Model, baryon number has no such associated field.

Also, you need to assume that enough black holes form to make your mechanism work. In standard cosmology this doesn't happen, despite the high temperatures: if you start with a standard Big Bang, the universe expands too fast for black holes to form.


However, in physics, finding a mechanism that solves a problem isn't the end; it's the beginning. We aren't all sitting around scratching our heads for just any mechanism to achieve baryogenesis. There are actually at least ten known, conceptually distinct ways to do it (including yours), fleshed out in hundreds of concrete models. The problem is that all of them require speculative new physics, additions beyond the core models we have already experimentally verified. Nobody can declare that a specific one of these models is true in the absence of any independent evidence.

It's kind of like we're all sitting around trying to find the six-digit password for a safe. If you walk by and say "well, obviously it could be 927583" without any further evidence, that's technically true, but you have not cracked the safe. The problem of baryogenesis isn't coming up with some six-digit number; that's easy. The problem is that we don't know which one is relevant, which mechanism actually exists in our universe.

What physicists investigating these questions actually do involves trying to link these models to things we can measure, or coming up with simple models that explain multiple puzzles at once. For example, one way to test a model with primordial black holes is to compute how many of them are heavy enough to survive until the present day, in which case you can go looking for them. Or, if they were created by some new physics, you could look for that new physics. Yet another strand is to note that if enough primordial black holes are still around today, they could be the dark matter, so you could try to get both baryogenesis and dark matter right simultaneously. All of this involves a lot of reading, math, and simulation.
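For a rough sense of the scale involved (an order-of-magnitude sketch using the commonly quoted Hawking lifetime formula, ignoring greybody factors and the particle content of the radiation): only primordial black holes heavier than roughly 10^11 to 10^12 kg would survive to the present day.

# Order-of-magnitude sketch: minimum initial mass of a primordial black hole
# that survives to today, using the Hawking lifetime
#   t_evap ~ 5120 * pi * G^2 * M^3 / (hbar * c^4)
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar  = 1.055e-34   # reduced Planck constant, J s
c     = 2.998e8     # speed of light, m s^-1
t_uni = 4.35e17     # approximate age of the universe, s (~13.8 Gyr)

lifetime = function(M) 5120 * pi * G^2 * M^3 / (hbar * c^4)
M_min    = (t_uni * hbar * c^4 / (5120 * pi * G^2))^(1/3)

M_min             # ~2e11 kg: lighter primordial black holes have already evaporated
lifetime(M_min)   # equals t_uni by construction, as a sanity check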
