Statistical Mechanics – How to Address Doubts on the Derivation of the Equipartition Theorem

statistical mechanics

I was reading my professor's notes on the equipartition theorem, and I have trouble understanding a specific passage in the derivation. I will first explain some of the notation used:

  1. $\Sigma(E) = \int_{\mathcal{H} < E}dq^{3N}dp^{3N} \sim \text{volume containing all the states with energy less than $E$} $
  2. $\Gamma(E) \sim \text{energy surface} $
  3. $ \Gamma_{\Delta}(E) = \int_{E<\mathcal{H}<E+\Delta}dq^{3N}dp^{3N} = \omega(E) \Delta $, where $\omega(E) = \partial{\Sigma(E)}/\partial E$, and $\Delta \ll E$
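For concreteness (this example is mine, not from the notes): for a single one-dimensional harmonic oscillator with $H = \frac{p^2}{2m} + \frac{1}{2}m\omega_0^2 q^2$, the region $H < E$ is an ellipse in the $(q,p)$ plane, so

$$\Sigma(E) = \frac{2\pi E}{\omega_0}, \qquad \omega(E) = \frac{\partial \Sigma}{\partial E} = \frac{2\pi}{\omega_0}, \qquad \Gamma_{\Delta}(E) = \omega(E)\,\Delta = \frac{2\pi\Delta}{\omega_0}.$$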
This is the derivation:

> We start by calculating the expectation value of $x_i \frac{\partial H}{\partial x_j}$, where $x_i$ indicates a specific $p$ or $q$. I'll also write $dp^{3N}dq^{3N} \equiv dpdq$ for simplicity.
> $$\left\langle x_i \frac{\partial H}{\partial x_j} \right \rangle = \frac{1}{\Gamma(E)} \int_{E< H <E+\Delta} dpdq\text{ } x_i \frac{\partial H}{\partial x_j} = \frac{\Delta}{\Gamma(E)}\frac{\partial}{\partial E}\int_{H < E} dpdq \text{ }x_i \frac{\partial H}{\partial x_j} $$
> Using the product rule and noticing that $\frac{\partial x_i}{\partial x_j} = \delta_{ij}$ and that **$E$ does not depend upon $x_j$**, we obtain:
> $$ \int_{H < E} dpdq \text{ } x_i \frac{\partial H}{\partial x_j} = \int_{H<E}dpdq \frac{\partial[x_i(H-E)]}{\partial x_j}-\delta_{ij}\int_{H<E} dpdq(H-E) $$
> **The first integral of the RHS can be rewritten as a surface integral, where the surface is the one described by $H-E = 0$, and therefore vanishes.**

I'd like to understand why the energy does not depend upon the $x_j$'s, when the kinetic energy, for instance, shows an explicit dependence on the momenta.
And most importantly, how can I justify the second sentence in bold? I honestly have no clue what it means. If someone could explain in detail how to obtain that result, it would be much appreciated.

Best Answer

The derivation is in the microcanonical ensemble. In such an ensemble the internal energy $E$ is a fixed parameter, not a function of anything (don't confuse it with the Hamiltonian $H(q,p)$, which does depend on the phase-space coordinates). Therefore, its derivative with respect to any variable or parameter of the system is zero.
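Spelled out in the notation of the question, that is all the product-rule step uses:
$$\frac{\partial\,[x_i(H-E)]}{\partial x_j} = \frac{\partial x_i}{\partial x_j}(H-E) + x_i\frac{\partial H}{\partial x_j} = \delta_{ij}(H-E) + x_i\frac{\partial H}{\partial x_j},$$
since $\partial E/\partial x_j = 0$, while $\partial H/\partial x_j$ is in general nonzero (the kinetic term of $H$ does depend explicitly on the momenta). Integrating over $H<E$ and rearranging gives exactly the identity quoted from the notes.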

The sentence in bold is frequently encountered, even in textbooks, but it is quite a sloppy way of describing what one is doing with the integral. There are two ways to prove that the first integral on the RHS vanishes. The first uses Gauss' theorem (the divergence theorem), which is the idea behind speaking of a transformation into a surface integral. There is a missing step, because Gauss' theorem requires a vector field, which is not evident in the formula; however, this is a minor difficulty. See, for instance, the answers to this question.
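For concreteness, here is one possible choice of the missing vector field (the choice is mine; others work equally well). In the $6N$-dimensional phase space, take $\mathbf{F}$ with a single nonvanishing component, the one along $x_j$, equal to $x_i(H-E)$. Then
$$\nabla\cdot\mathbf{F} = \frac{\partial\,[x_i(H-E)]}{\partial x_j}, \qquad \int_{H<E} dpdq\;\nabla\cdot\mathbf{F} = \oint_{H=E} \mathbf{F}\cdot\hat{\mathbf{n}}\;dS = 0,$$
because $H-E=0$ on the bounding surface, so $\mathbf{F}$ itself vanishes there.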

Alternatively, and much more simply (and also more cleanly from the mathematical point of view, taking into account that the canonical coordinates cannot always be considered the components of a vector), one can eliminate the need for vector calculus and work only with one-dimensional integrals, in the spirit of integration by parts. The idea is to rewrite the integral $ \int_{H<E}dpdq \frac{\partial[x_i(H-E)]}{\partial x_j}$ as an integral over all the canonical coordinates except $x_j$ of an inner integral over $x_j$. Let's indicate by $d\Gamma_j$ the measure in the $(6N-1)$-dimensional space not containing the coordinate $x_j$: $$ \int_{H<E}dpdq \frac{\partial[x_i(H-E)]}{\partial x_j}= \int d\Gamma_j\int d{x_j}\frac{\partial[x_i(H-E)]}{\partial x_j}. $$ The original domain in phase space ($H<E$) is bounded. Therefore, for any value of the other coordinates, $x_j$ varies between a lower and an upper value (say $x_j^m$ and $x_j^M$), both corresponding to the condition $H=E$. The inner integral over $x_j$ becomes $ \left. x_i(H-E) \right|^{x_j^M}_{x_j^m} $, which vanishes because $H=E$ at both extrema of integration.
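As a final sanity check (mine, not part of the derivation above), the identity all of this leads to, $\langle x_i\,\partial H/\partial x_j\rangle = \delta_{ij}\,\Sigma(E)/\omega(E)$, is easy to verify numerically for a case where the microcanonical shell can be sampled exactly. For $n$ one-dimensional harmonic oscillators, in mass- and frequency-rescaled coordinates $y_k$ one has $H=\sum_k y_k^2$, so the energy surface is a sphere of radius $\sqrt{E}$ on which $|\nabla H|$ is constant; uniform sampling on that sphere then reproduces the microcanonical measure, and the predicted diagonal average is $\Sigma/\omega = E/n$.

```python
import numpy as np

# Sanity check of <x_i dH/dx_j> = delta_ij * Sigma/omega for n 1D harmonic
# oscillators. In rescaled coordinates H = sum_k y_k^2, the surface H = E
# is a (2n-1)-sphere of radius sqrt(E); |grad H| is constant on it, so
# uniform points on the sphere sample the microcanonical shell.

rng = np.random.default_rng(0)
n = 5                  # number of oscillators -> 2n phase-space coordinates
E = 3.0                # energy of the shell
samples = 200_000

# Uniform points on the sphere of radius sqrt(E): normalize Gaussian vectors.
y = rng.normal(size=(samples, 2 * n))
y *= np.sqrt(E) / np.linalg.norm(y, axis=1, keepdims=True)

# dH/dy_j = 2*y_j; the identity holds in the rescaled coordinates as well,
# since the constant Jacobian of the rescaling cancels in the average.
diag = np.mean(2.0 * y[:, 0] * y[:, 0])   # i = j
off = np.mean(2.0 * y[:, 0] * y[:, 1])    # i != j

print("diagonal:", diag, "  expected Sigma/omega = E/n =", E / n)
print("off-diagonal:", off, "  expected 0")
```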