Is Gibbs/Boltzmann probability the 'true' probability of a particle being in a particular state in the canonical ensemble?

probability, soft-question, statistical-mechanics, thermodynamics

Based on the classical interpretation of probability, the probability for a single particle in an $N$-particle system to be in the $i$th energy state should be given by the number of particles in that state divided by the total number of particles:

$$p(i)= \frac{n_i}{N}$$

Here $n_i$ represents the actual number of particles in the $i$th state. However, due to random fluctuations and collisions, the actual number of particles in a particular energy level is never constant and keeps changing if the total number of particles is finite. By that logic, the 'true' probability of finding a particle at a certain energy level should be impossible to determine, since we can't sensibly talk about a fixed number of particles in a state, at least for a finite system.
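To make the fluctuation concrete, here is a minimal sketch that repeatedly "snapshots" $N$ particles whose states are drawn independently from some fixed per-particle distribution (the three levels, their probabilities, and $N$ are made-up values, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                     # finite number of particles
p_levels = [0.5, 0.3, 0.2]  # assumed per-particle probabilities for 3 levels (made-up numbers)

# Take a few independent "snapshots" of the system and count how many
# particles happen to sit in level 0 each time.
for snapshot in range(5):
    states = rng.choice(3, size=N, p=p_levels)
    n_0 = int(np.sum(states == 0))
    print(f"snapshot {snapshot}: n_0 = {n_0}, n_0/N = {n_0 / N:.2f}")
```

Even though every particle is drawn from the same distribution, $n_0/N$ wanders around $0.5$ from snapshot to snapshot, so no single snapshot pins down a 'true' frequency.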

Hence we use the Gibbs/Boltzmann distribution and claim that the probability of a particle being in a certain state is given by the following:

$$p(i)=\frac{e^{-\beta E_i}}{Z}$$

However, this is not the exact true probability, is it? Isn't it technically more like our best guess of what the true probability of a particle being in state $E_i$ should be? Since the number of particles in each state keeps changing, it becomes nonsensical to talk about this 'true' probability of finding the particle in a certain energy state at any given time.
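Just to make $Z$ concrete, here is a minimal sketch of evaluating this formula for a toy three-level system (the energies and $\beta$ are made-up values):

```python
import numpy as np

E = np.array([0.0, 1.0, 2.0])   # toy energy levels (arbitrary units, made up)
beta = 1.0                      # beta = 1/(k_B T); value chosen only for illustration

boltzmann_factors = np.exp(-beta * E)   # e^{-beta * E_i}
Z = boltzmann_factors.sum()             # partition function: sum of the factors
p = boltzmann_factors / Z               # p(i) = e^{-beta * E_i} / Z

print(p)          # probability of each level
print(p.sum())    # sums to 1 by construction
```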

So, would it be correct to assume that the Gibbs probability is the average, or 'expected', probability of finding a particle in a particular state? Since this is the average probability and not the true one, it becomes impossible to find out the exact number of particles in that state. Because of this, the number of particles in a state becomes a random variable with a distribution whose mean is given by $Np(i)$.
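One way to make that precise, under the idealization that each particle's state is an independent draw from $p(i)$ (an assumption on my part, which ignores correlations between particles), is to treat the occupancy as a binomial random variable:

$$n_i \sim \mathrm{Binomial}\big(N,\, p(i)\big), \qquad \langle n_i \rangle = N\,p(i), \qquad \mathrm{Var}(n_i) = N\,p(i)\,\big(1 - p(i)\big).$$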

So, can we say that, in the truest sense of classical probability, the Boltzmann probability is the average probability of finding a particle in a certain state in the system, only because the true probability keeps changing as the system undergoes collisions and the like, and the occupancy of a state is never constant?

In the infinite-particle limit, the fluctuations die out and the actual number of particles becomes close to the expected number of particles, so one can claim that the Gibbs probability is approximately equal to the classical probability.
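Under the same independent-draw idealization as above, the relative spread of $n_i$ falls off as $1/\sqrt{N}$:

$$\frac{\sqrt{\mathrm{Var}(n_i)}}{\langle n_i \rangle} = \sqrt{\frac{1 - p(i)}{N\,p(i)}} \;\longrightarrow\; 0 \quad \text{as } N \to \infty,$$

so the observed frequency $n_i/N$ concentrates around $p(i)$ in the large-$N$ limit.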

If we could theoretically know the 'true' probability of finding a particle in a certain state, which is of course impossible, we could find the exact number of particles in that state. In that case, the number of particles in the state wouldn't follow a distribution; it would be more like a yes-or-no question, just like picking coloured balls from a bag: if you know the probability of picking a blue ball from a bag of $100$ balls, you can easily find the number of blue balls. It would be exactly the probability multiplied by the total number of balls, not some distribution of various possibilities.

But in this case, since you don't know the exact probability, and since the number of particles in the state keeps changing, you can only talk about the expected number of particles in a state, i.e., you get a distribution over the number of particles in that state.

I'm sorry if I'm spending too much time forcing an interpretation onto a rather simple problem, but can anyone tell me whether my interpretation of the situation is correct?

Best Answer

Your idea makes sense, but I believe it is better discussed in terms of a Bayesian notion of probability than a frequentist view.

The definition of probability you presented is "frequentist", in the sense that you understand probabilities in terms of the frequency with which a certain outcome will happen. Another possible view is to think of probability in terms of "bets": if I tell you I have a box filled with 120 balls and 80 of them are blue, how much would you be willing to bet on me pulling out a blue ball? If it is blue, I win; otherwise, you win. You would not want to accept odds worse than 1:2 (i.e., if you win I pay you 2 dollars, if I win you pay me 1 dollar), because if I charge you more you are likely to lose money. Notice, however, that if you know I usually lie about the number of balls in a box, your willingness to bet at 1:2 will not be the same. Instead, you'll want to bet somewhat less, because you are also taking into consideration some extra information you had beforehand. This shows that probability is not an absolute concept, but instead something that depends on the information you have.

Let us consider a gas as an example. You want to attribute probabilities to which microstate the gas is currently in. How can you do that? Well, you use the Fundamental Postulate of Statistical Mechanics and state that all the available microstates are equally likely.

But suppose now you know your gas is in thermal equilibrium with a reservoir at some temperature $T$. Temperature is essentially a measure of the mean energy of the particles in your gas, so this is equivalent to saying you have a well-defined mean value for your energy. This time, you have more information than before, and you would like to use it to determine your probabilities in the very same way you used the extra information when betting against me. Last time, we used a uniform probability, which we can justify by saying that we had no information on why any state should be preferred over another. This time, we again want to pick a distribution that assumes no more information than we actually have, and so we'll pick the probability distribution that maximizes the "lack of information" (AKA entropy) of our system subject to the constraints provided by the information we do have (the mean value of the energy is fixed).
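For concreteness, carrying out that maximization with Lagrange multipliers $\alpha$ and $\beta$ for the two constraints (normalization and a fixed mean energy) is exactly what produces the Boltzmann form:

$$\frac{\partial}{\partial p_i}\left[-\sum_j p_j \ln p_j - \alpha\Big(\sum_j p_j - 1\Big) - \beta\Big(\sum_j p_j E_j - \langle E \rangle\Big)\right] = 0 \;\;\Longrightarrow\;\; p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_j e^{-\beta E_j},$$

where $\beta$ is fixed by the mean-energy constraint and turns out to be $1/k_B T$.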

Will this process give us the correct probability? To answer, I quote an adaptation of a remark by Jaynes given in DOI: 10.3390/e18070247:

If, in a given context, you need to formulate a probability distribution on which to base your bets, choose, among all possible distributions that agree with what you know about the problem, the one having maximum entropy. Why? Is this guaranteed to be the "real" (whatever that may mean) probability distribution? Of course not! In fact you will most likely replace it with a new one as soon as you see the outcome of the next trial—because by then you will have one more piece of information. Why, then? Because any other choice—being tantamount to throwing away some of the information you have or assuming information you don't have—would be indefensible.

Hence, you are correct in the idea that the Boltzmann probability is the "best guess" for how many particles occupy each state, given the information available. If you obtain more information, you will likely abandon the Boltzmann distribution and update your probabilities accordingly, just like someone counting cards in Blackjack updates their probability of winning money with each new card they see. Notice that knowing the "true probability" would mean having complete information about the system, and as you mentioned, this ends up leading to a single possibility in which you know exactly how many particles are in each state.

If these ideas interest you, you may want to take a look at this post: Reference for statistical mechanics from information theoretic view.