Apparent paradox in statistical mechanics

probability, quantum mechanics, statistical mechanics, thermodynamics

I can't understand why the likelihood of a particle being in state $\epsilon_i$ in a canonical ensemble does not depend upon the number of particles in that state.

The probability that a single particle is in state $\epsilon_i$ is given by the Gibbs/Boltzmann probability $p_i=\frac{e^{-\beta\epsilon_i}}{Z}$.

This only depends upon the energy level and the temperature.

However, the classical definition of probability tells me that the probability of a particle being in a certain state should be equal to the number of particles in that state divided by the total number of particles. Although the Gibbs probability matches this description when the number of particles tends to infinity (via the frequentist interpretation), I can't seem to wrap my head around the fact that, in the finite case, the likelihood of a particle having a certain energy can be independent of the number of particles in that level.

Take this apparent paradox as an example:

Suppose the Gibbs probability of a particle being in a state $e_i$ is given by $p_i$. This means that there is always a $p_i$ chance of finding the particle in this state. However, the energy of a particle fluctuates, and the population of a state keeps changing due to collisions etc. In particular, one of the unlikely but possible microstates has $0$ particles in that state.
Suppose at time $t$ there are no particles in that state; there is a finite probability for this to happen. Even though we can never know when this happens, our intuition tells us that when it does, the chance of finding a particle in this state must be $0$, because by definition there are no particles in this state, i.e. the system is in a microstate with $0$ particles in $e_i$. But even then we have a $p_i$ chance of finding the particle in that state.

How can I resolve this apparent paradox?

My idea was that, instead of treating the particles like marbles in a box, where the chance of getting a blue marble is equal to the number of blue marbles divided by the total number of marbles, I should treat these particles like dice. If we throw $100$ dice, the chance of a single die showing a six is independent of how many other dice show a six on the floor. So even if none of the $100$ dice show a six, there is still a $1/6$ chance for any single die to show a six.
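A quick simulation makes this concrete (a rough sketch only; I use $10$ dice rather than $100$ so that the event "no other die shows a six" actually occurs often enough to condition on):

```python
import random

# Roll a handful of dice many times and ask whether die #1 showing a six
# cares about how many of the other dice show a six.
trials = 200_000
n_dice = 10  # small enough that "no other six" happens reasonably often

first_is_six = 0
no_other_six = 0
first_is_six_and_no_other_six = 0

for _ in range(trials):
    rolls = [random.randint(1, 6) for _ in range(n_dice)]
    if rolls[0] == 6:
        first_is_six += 1
    if all(r != 6 for r in rolls[1:]):
        no_other_six += 1
        if rolls[0] == 6:
            first_is_six_and_no_other_six += 1

print("P(die 1 shows a six)                     ~", first_is_six / trials)
print("P(die 1 shows a six | no other die does) ~",
      first_is_six_and_no_other_six / no_other_six)
# Both estimates hover around 1/6 = 0.1667, because the dice are independent.
```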

Is this the correct analogy? Is comparing these particles with marbles in a bag absolutely wrong? Should I instead compare them to coins or dice or something like that? In that case, is checking a single particle to find its probability equivalent to finding the probability of picking a blue marble from a box of marbles, or is it equivalent to checking the probability that a single die rolls a six or a single coin comes up heads?

Best Answer

The first major thing to understand is the idea of a statistical ensemble. What one does is to replace a real physical system - say, a box of gas sitting on a table - with an imaginary ensemble consisting of a vast (or formally infinite) number of copies of the system, each in a different possible microstate, and a probability measure which assigns each microstate (or in the infinite case, each measurable set of microstates) a probability of being occupied by the real system being modeled. From there, when we talk about the statistical properties of our system, we're really talking about the statistical properties of the ensemble.

For example, in the case of the gas in the box one might ask for the probability $P$ that >51% of the gas particles are in the left half of the box. To compute this, we would first assign each microstate $\mu$ a probability $p_\mu$ of being occupied; from there, we would take the sum of all the $p_\mu$'s corresponding to microstates which satisfy the >51% condition. If, for example, all of the microstates are equally likely, then $P$ is just the number of microstates satisfying the >51% condition divided by the total number of microstates.
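Here is a small numerical sketch of that counting, under the simplifying assumption that each of $N$ particles is independently equally likely to sit in either half of the box, so that all $2^N$ microstates are equally probable:

```python
from math import comb

# Toy version of the ">51% on the left" example: N non-interacting particles,
# each equally likely to be in the left or right half, so every one of the
# 2^N microstates carries probability 2^(-N).
N = 100
threshold = 0.51  # fraction of particles that must be on the left

# P = (number of microstates with more than 51% on the left) / 2^N
favourable = sum(comb(N, k) for k in range(N + 1) if k / N > threshold)
P = favourable / 2**N

print(f"P(more than 51% of {N} particles in the left half) = {P:.4f}")
```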

When we turn our attention back to the box of gas on the table in front of us, we could interpret this $P$ in two different ways:

  1. It tells us that if we measure the box $N$ times, we should expect it to satisfy the >51% condition $PN$ times (see frequentism).
  2. It gives us a measure of the confidence we should have that, if we measure the box once, it will satisfy the >51% condition (see Bayesianism).

The probability that a single particle is in state $\epsilon_i$ is given by the Gibbs/Boltzmann probability $P = e^{-\beta \epsilon_i}/Z$.

This is a misunderstanding in general, but it holds under certain conditions. The probability that a system - which is in thermal contact with a reservoir at temperature $T$ - is in a microstate with energy $\epsilon_i$ is given by this expression, where the partition function is the sum over microstates $\mu$: $$Z = \sum_{\mu\in\text{microstates}} e^{-\beta \epsilon_\mu}$$ Now, let's say that the system consists of $N$ non-interacting, identical particles, each of which may inhabit one of a set of states $\sigma$ with energies $\mathcal E_\sigma$. Since the energy of the full system microstate $\epsilon_\mu$ is just the sum of all of the single-particle energies, we can rearrange the sum to yield $$Z = \sum_\mu e^{-\beta \epsilon_\mu} = \frac{1}{N!}\prod_{i\in\text{particles}}\sum_{\sigma\in\text{states}} e^{-\beta \mathcal E_\sigma} = \frac{Z_1^N}{N!}$$ where $Z_1 = \sum_\sigma e^{-\beta \mathcal E_\sigma}$ is the "single particle" partition function, and the factor of $1/N!$ has been inserted to resolve the Gibbs paradox. In essence, we are viewing a single particle as a system in its own right.
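If you want to see the factorization numerically, here is a minimal sketch with a made-up three-level spectrum and an arbitrary $\beta$; the particles are treated as distinguishable, so the brute-force sum over microstates gives exactly $Z_1^N$, and the $1/N!$ is then put in by hand as the Gibbs correction:

```python
import itertools
from math import exp, factorial, isclose

# Toy check of Z = Z_1^N / N! for non-interacting particles.
# Hypothetical three-level single-particle spectrum (arbitrary units).
energies = [0.0, 1.0, 2.5]
beta = 0.7
N = 4  # number of particles

# Single-particle partition function: Z_1 = sum_sigma exp(-beta * E_sigma)
Z1 = sum(exp(-beta * E) for E in energies)

# Brute-force sum over all microstates of N *distinguishable* particles:
# each microstate assigns one level to each particle, and its energy is
# the sum of the single-particle energies.
Z_distinguishable = sum(
    exp(-beta * sum(microstate))
    for microstate in itertools.product(energies, repeat=N)
)

print(Z_distinguishable, Z1**N)           # these agree
print(isclose(Z_distinguishable, Z1**N))  # True

# The Gibbs 1/N! corrects the overcounting of identical particles:
Z = Z1**N / factorial(N)
print(Z)
```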

If we do this, then your expression works - the probability that a state $\sigma$ with energy $\mathcal E_\sigma$ is occupied by some chosen particle is indeed $e^{-\beta \mathcal E_\sigma}/Z_1$. The reason this is independent of all of the other particles in the system is that we have assumed the particles don't interact; as a result, no particle knows what the others are doing, and their single-particle probability distributions are independent of one another.
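A short sketch of that statement, reusing the toy spectrum from the check above: the single-particle probabilities $p_\sigma = e^{-\beta \mathcal E_\sigma}/Z_1$ are fixed numbers, while the actual occupation numbers fluctuate around $N p_\sigma$ from snapshot to snapshot, occasionally hitting zero, which is exactly the situation described in the question:

```python
import random
from math import exp

# Non-interacting particles, each independently Boltzmann-distributed over
# a hypothetical three-level spectrum (same made-up numbers as above).
energies = [0.0, 1.0, 2.5]
beta = 0.7
N = 10  # particles in the system

weights = [exp(-beta * E) for E in energies]
Z1 = sum(weights)
p = [w / Z1 for w in weights]  # fixed single-particle probabilities

print("p_sigma =", [round(x, 3) for x in p], "(sums to", round(sum(p), 3), ")")
print("expected occupations N*p_sigma =", [round(N * x, 2) for x in p])

# A few independent "snapshots" of the system: because the particles do not
# interact, each is drawn from p independently of the rest, so the occupation
# of the top level sometimes drops to 0 even though p never changes.
for snapshot in range(5):
    states = random.choices(range(len(energies)), weights=p, k=N)
    occupations = [states.count(s) for s in range(len(energies))]
    print(f"snapshot {snapshot}: occupations = {occupations}")
```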

Even though we can never know when this happens, our intuition tells us that when it does, the chance of finding a particle in this state must be 0, because by definition there are no particles in this state, i.e. the system is in a microstate with 0 particles in $e_i$. But even then we have a $p_i$ chance of finding the particle in that state. How can I resolve this apparent paradox?

You are mixing up the system with the ensemble you're using to model it. The probabilities aren't computed from the real system in front of you on the table; they're computed from an ensemble of identical systems all in different microstates. This probability can be interpreted either in the frequentist or Bayesian sense, but in either case it is not a statement about the actual state of the actual system on your table.