I've been reading a bit about finite temperature quantum field theory, and I keep coming across the claim that when one Euclideanizes time
$$it\to\tau,$$
the time dimension becomes periodic, with period related to the inverse temperature $\beta$. Can someone please explain where the periodicity comes from and how we know to identify the period with $\beta$?
Quantum Field Theory – Why is Euclidean Time Periodic?
quantum-field-theory, temperature, thermal-field-theory, time, wick-rotation
Related Solutions
There are lots of different ways to answer this, but none of them can be too intuitive, since imaginary time is, well, imaginary. But here is one attempt to make the result more or less self-evident.
The basic object to calculate in quantum statistical mechanics (in thermal equilibrium, in the canonical ensemble) is the partition function (with potential insertions if you want to calculate correlation functions):
$$Z= \operatorname{Tr}(e^{-\beta H})= \sum_\psi \langle \psi(0)|e^{-\beta H}|\psi(0) \rangle$$
where $H$ is the Hamiltonian and we have a sum over any complete set of states $\psi$, written in the Schrödinger picture at some fixed time which we take to be $t=0$. In that picture the time evolution of a state is
$$|\psi(t)\rangle = e^{-i t H}|\psi(0)\rangle$$
The basic observation now is that the Boltzmann factor $e^{-\beta H}$ can be regarded as the evolution of the state $\psi$ over the imaginary time $-i \beta$. Therefore we can write:
$$Z= \sum_\psi \langle \psi(0)|\psi(-i\beta) \rangle$$
This now looks like a vacuum amplitude (with possible insertions), summed over all states $\psi$ of some arbitrary complete basis, except that the final state is propagated by the imaginary time $-i\beta$ relative to the initial state. In other words, however you choose to calculate your vacuum amplitude (or correlation function), for instance with a path integral, you have to impose the condition that the initial and final states are the same up to that imaginary time shift. This is the origin of the imaginary time periodicity.
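A quick numerical sanity check of this identity (a minimal sketch; the random 3×3 Hermitian matrix below is an arbitrary stand-in for a Hamiltonian, not anything physical): evolving each basis state over the imaginary time $-i\beta$ and summing the overlaps reproduces $\operatorname{Tr}(e^{-\beta H})$.

```python
import numpy as np

# Arbitrary Hermitian "Hamiltonian" on a 3-dimensional Hilbert space
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (M + M.conj().T) / 2
beta = 1.3

# e^{-beta H} via eigendecomposition (exact for Hermitian H)
evals, U = np.linalg.eigh(H)
exp_mbH = U @ np.diag(np.exp(-beta * evals)) @ U.conj().T

# Partition function as a trace ...
Z_trace = np.trace(exp_mbH).real

# ... and as a sum of overlaps <psi(0)|psi(-i beta)>:
# |psi(t)> = e^{-i t H}|psi(0)>, so t = -i beta gives |psi(-i beta)> = e^{-beta H}|psi(0)>
Z_sum = sum(np.vdot(psi, exp_mbH @ psi) for psi in np.eye(3)).real

print(Z_trace, Z_sum)  # the two values agree
```

Of course, summing the diagonal overlaps is exactly what a trace is; the point of the exercise is only to make the "propagate by $-i\beta$ and glue back" picture concrete.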
Well, the singularity does not concern the differentiable structure: even around the tip of a cone (including the tip) you can define a smooth differentiable structure (obviously this smooth structure cannot be induced by the natural one in $R^3$ when the cone is viewed as embedded in $R^3$). Here, however, the singularity is metrical!

Consider a $2D$ smooth manifold and a point $p$, and suppose that a smooth metric can be defined in a neighborhood of $p$, including $p$ itself. Next consider a curve $\gamma_r$ surrounding $p$, defined as the set of points with constant geodesic distance $r$ from $p$, and let $L(r)$ be the (metric) length of that curve. It is possible to prove that: $$L(r)/(2\pi r) \to 1\quad \mbox{ as $r \to 0$.}\qquad (1)$$ Actually it is quite evident that this result holds.

We say that a $2D$ manifold, equipped with a smooth metric on a neighborhood $A-\{p\}$ of $p$ (notice that now $p$ does not belong to the set where the metric is defined), has a conical singularity in $p$ if: $$L(r)/(2\pi r) \to a\quad \mbox{ as $r \to 0$,}$$ with $0<a<1$.
Notice that the class of curves $\gamma_r$ can be defined anyway, even if the metric at $p$ is not defined, since the lengths of curves and geodesics are still defined (as limits when an endpoint terminates at $p$). Obviously, if there is a conical singularity in $p$, it is not possible to extend the metric of $A-\{p\}$ to $p$: otherwise (1) would hold true, and we know that it is false.
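A concrete example (a standard textbook one, not taken from the answer above) is the flat cone. In polar coordinates around $p$, take the metric

```latex
ds^2 = dr^2 + a^2 r^2\, d\theta^2\:, \qquad \theta \in [0, 2\pi)\:,\quad 0 < a < 1\:.
```

The curve $\gamma_r$ at geodesic distance $r$ from $p$ has length $L(r) = 2\pi a r$, so $L(r)/(2\pi r) = a$ for every $r$, and the limit as $r \to 0$ is $a < 1$: a conical singularity sits at $r = 0$. Equivalently, rescaling the angle to $\varphi = a\theta \in [0, 2\pi a)$ shows the metric is locally flat, but the total angle around $p$ is $2\pi a$ rather than $2\pi$ (a deficit angle of $2\pi(1-a)$).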
As you can understand, all that is independent from the choice of the coordinates you fix around $p$. Nonetheless, polar coordinates are very convenient to perform computations: The fact that they are not defined exactly at $p$ is irrelevant since we are only interested in what happens around $p$ in computing the limits as above.
Yes, removing the point would get rid of the singularity, but the fact remains that it is impossible to extend the manifold so as to have a metric defined also at the limit point $p$: the metric on the rest of the manifold remembers the existence of the conical singularity!
The fact that the Lorentzian manifold has no singularities in the Euclidean section and is periodic in the Euclidean time coordinate has the following physical interpretation in a manifold with a bifurcate Killing horizon generated by a Killing vector field $K$. As soon as you introduce a field theory in the Lorentzian section, the smoothness of the manifold and the periodicity in Euclidean time imply that the two-point function of the field, computed with respect to the unique Gaussian state that is invariant under the Killing time and satisfies the so-called Hadamard condition (it is this function that is analytically continued in Euclidean time to obtain the Euclidean section), satisfies a certain condition called the KMS condition, with period $\beta = 8\pi M$.
That condition means that the state is thermal, and the period of the imaginary time is the constant $\beta$ of the canonical ensemble described by that state (where the thermodynamical limit has also been taken). Thus the associated "statistical mechanics" temperature is: $$T = 1/\beta = 1/(8\pi M)\:.$$
However, the "thermodynamical temperature" $T(x)$ measured at the event $x$ by a thermometer "at rest with" (i.e. whose world line is tangent to) the Killing time in the Lorentzian section has to be corrected by the well-known Tolman factor. It takes into account the fact that the perceived temperature is measured with respect to the proper time of the thermometer, whereas the state of the field is in equilibrium with respect to the Killing time. The ratio of the two notions of temperature is the inverse of the ratio of the two notions of time, and it is encapsulated in the square root of the magnitude of the component $g_{00}$ of the metric: $$\frac{T}{T(x)}=\frac{dt_{proper}(x)}{dt_{Killing}(x)} = \sqrt{-g_{00}(x)}\:.$$ In an asymptotically flat spacetime, $g_{00} \to -1$ as $r \to +\infty$, so the "statistical mechanics" temperature $T$ coincides with the temperature $T(r=\infty)$ measured by a thermometer far away from the black hole horizon. This answers your last question.
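For the Schwarzschild metric (assuming that is the case the answer has in mind, given $\beta = 8\pi M$), $-g_{00}(r) = 1 - 2M/r$, so the Tolman-corrected temperature reads

```latex
T(r) = \frac{T}{\sqrt{-g_{00}(r)}} = \frac{1}{8\pi M\,\sqrt{1 - 2M/r}}\:,
```

which diverges as $r \to 2M$ (a static thermometer near the horizon measures an arbitrarily high temperature) and tends to $1/(8\pi M)$ as $r \to \infty$, recovering the Hawking temperature measured at infinity.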
Best Answer
I don't think that Wick-rotated time $\tau$ is periodic by itself. But it turns out that thermal averages of operators are periodic with respect to the variable $\tau$. Consider a generic time-dependent operator $\hat{A}(\tau)$ with the standard imaginary-time evolution $\hat{A}(\tau) = e^{\hat{H}\tau} \hat{A}(0) e^{-\hat{H}\tau}$, and consider its thermal average $A(\tau) \equiv \left\langle \hat{A}(\tau) \right\rangle = Z^{-1} \mathrm{Tr}[e^{-\beta \hat{H} }\hat{A}(\tau)]$, where $Z$ is the partition function. You can prove rather simply that $A(\tau + \beta) = A(\tau)$, by exploiting first the fact that $e^{-\beta\hat{H}} e^{\beta\hat{H}} = 1$ and second the cyclic property of the trace (I'll leave this as an exercise).
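The exercise can also be checked numerically (a minimal sketch; the random Hermitian $H$ and random operator $\hat A$ below are arbitrary, nothing physical):

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, tau = 4, 0.7, 0.3

# Arbitrary Hermitian Hamiltonian and arbitrary operator A(0)
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (M + M.conj().T) / 2
A0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

evals, U = np.linalg.eigh(H)

def expH(s):
    """Matrix exponential e^{s H} via the eigendecomposition of H."""
    return U @ np.diag(np.exp(s * evals)) @ U.conj().T

def thermal_avg(t):
    """A(t) = Tr[e^{-beta H} e^{H t} A(0) e^{-H t}] / Z"""
    A_t = expH(t) @ A0 @ expH(-t)
    return np.trace(expH(-beta) @ A_t) / np.trace(expH(-beta))

print(thermal_avg(tau), thermal_avg(tau + beta))  # equal: A(tau + beta) = A(tau)
```

The agreement is exact (up to floating-point error) because $e^{-\beta\hat H}$ and $e^{\tau\hat H}$ commute and the trace is cyclic, which is precisely the two-step proof suggested above.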
However, not all the objects we are interested in are necessarily periodic. A remarkable example is the Green function at positive times $\tau \geq 0$, $$ G_{kp}(\tau) = - \left\langle \hat{\psi}_k(\tau) \hat{\psi}_p^{\dagger}(0) \right\rangle, $$ which is written in terms of time-dependent field operators. In fact, for the time-ordered (Matsubara) Green function one can prove that $G_{kp}(\tau + \beta) = \zeta G_{kp}(\tau)$ on the branch $-\beta < \tau < 0$, where $\zeta = +1$ if $\hat{\psi}$ is a bosonic operator and $\zeta = -1$ if it is fermionic, so that the function is either periodic or antiperiodic.
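The antiperiodicity can be checked explicitly for a single fermionic mode, $\hat H = \varepsilon\, \hat\psi^\dagger \hat\psi$ (a minimal 2×2 sketch; the values of $\varepsilon$, $\beta$, $\tau$ below are arbitrary). The relation connects the two time-ordered branches: for $0 < \tau < \beta$ one has $G(\tau) = -\langle \hat\psi(\tau)\hat\psi^\dagger(0)\rangle$, the negative-time branch is $G(\tau-\beta) = +\langle \hat\psi^\dagger(0)\hat\psi(\tau-\beta)\rangle$ (the sign comes from the fermionic time ordering), and antiperiodicity means $G(\tau-\beta) = -G(\tau)$:

```python
import numpy as np

eps, beta, tau = 1.5, 2.0, 0.6   # mode energy, inverse temperature, 0 < tau < beta

# Fock basis {|0>, |1>} for one fermionic mode
c = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilation operator
H = eps * (c.T @ c)                      # H = eps * c^dagger c = diag(0, eps)

def expH(s):
    # H is diagonal, so e^{sH} is too
    return np.diag(np.exp(s * np.diag(H)))

Z = np.trace(expH(-beta))

def c_heis(t):
    """Imaginary-time Heisenberg operator c(t) = e^{H t} c e^{-H t}."""
    return expH(t) @ c @ expH(-t)

# tau > 0 branch:  G(tau)        = -<c(tau) c^dagger(0)>
G_plus = -np.trace(expH(-beta) @ c_heis(tau) @ c.T) / Z
# tau < 0 branch:  G(tau - beta) = +<c^dagger(0) c(tau - beta)>
G_minus = +np.trace(expH(-beta) @ c.T @ c_heis(tau - beta)) / Z

print(G_plus, G_minus)   # G(tau - beta) = -G(tau)
```

Analytically, both traces reduce to $e^{-\tau\varepsilon}/Z$ up to sign, so the check is exact rather than approximate.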
In conclusion, the (anti)periodicity of functions with respect to Euclidean time relies on how thermal averages are computed.