Q1: What is ergodicity, and what is ergodicity breaking, in a Monte Carlo simulation of a statistical physics problem?
Q2: How does one ensure that ergodicity is maintained?
Geant is a framework, which means that you use it to build applications that simulate the detector and physics you are interested in. The simulation can include all of the physics and the complete detector, including electronics and trigger (i.e. you can write your simulation so that it outputs a data file that looks just like the one you are going to get from the experiment¹).²
The various parts of Geant are validated by their ability to correctly predict the outcomes of experiments. Particular models are tuned on well-known physics early in the analysis of the data. This allows you to get simulated optical properties, detector gains, and so on correctly matched to the actual instrument.
Geant is also heavily documented. Read the introduction and the first two chapters of the User's Guide for Application Developers, which will give you the basics. After that you can delve into the hairy details in the Physics and Software references. There is much, much too much to cover in a Stack Exchange answer. (I mean literally: if I tried, I'd end up overrunning the 32k-characters-per-post limit.)
It helps to know that Geant4 derives from Geant3 and earlier efforts. This thing has a history that goes back for decades and has been tested in thousands of experiments large and small.
The use in the Higgs search goes something like this:
Actually, you did all of the above at lower precision several times during the design and funding phase, and used those results to determine how much data you would have to collect, what kinds of instrumentation densities you needed, what data rate you had to be able to support, and so on ad nauseam.
Once you have got the data, you start by showing that:
You can detect lots of well-known physics in your detector (to validate the detector and find unexpected problems)⁵
Your model correctly represents the detector response to that well-known physics (to let you debug and tune your model)
Then you may need to re-run some of the "expected" processing.
Only then can you try to compare data to expectation.⁶
¹ Indeed, the data format is often thrashed out and debugged from the MC before the experiment is even built.
² For big, complicated experiments like those at the LHC, Geant is usually paired with one or more external event generators. In the neutrino experiments I'm currently working on, that means Genie and Cry. Not sure what the collider guys are using right now.
³ For speed reasons we often simulate the electronics and trigger outside of Geant proper, but this decision is made on a case-by-case basis.
⁴ Indeed, the analyzer is often programmed and debugged from the MC output before there is real data.
⁵ This is also where most of the actual repetition of results in the particle physics world comes from. You won't get funding to repeat BigExper's measurement of the WingDing Sum Rule, but if your proposed NextGen spectrometer can do that as well as your spiffy New Physics (tm), it helps your case with the funding agencies.
⁶ Many of these steps will be done by more than one person/group in the collaboration to provide copious cross-checks and protection against embarrassing mistakes. (See also: OPERA's little issue last year...)
This model (the classical Heisenberg model, with 3-component spins, on a 2D $L\times L$ square lattice) is one of the challenging ones. I believe the current thinking is that there is no true phase transition; instead the model exhibits pseudocritical behaviour: the correlation length becomes extremely large, but not infinite. The latest publication that I found is by Y. Tomita, Phys. Rev. E 90, 032109 (2014), where a finite-size-scaling analysis was employed on systems up to $1024\times1024$. Also, the Monte Carlo simulations employed special techniques (Swendsen-Wang cluster updating). I'm afraid that I couldn't find an open-access version of the paper cited above.
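To give a flavour of what cluster updating means for continuous spins: the paper uses Swendsen-Wang, but the closely related Wolff single-cluster update is easier to sketch. Below is a minimal illustration for the classical Heisenberg model with $J=1$ on a periodic lattice; the function name and layout are my own choices, not taken from the paper.

```python
import numpy as np

def wolff_update(spins, beta, rng):
    """One Wolff single-cluster update for the classical Heisenberg model
    H = -J sum_<ij> s_i . s_j (J = 1) on a periodic L x L lattice.
    spins has shape (L, L, 3) with unit-length spins."""
    L = spins.shape[0]
    # Pick a random reflection axis r, uniform on the unit sphere.
    r = rng.normal(size=3)
    r /= np.linalg.norm(r)
    proj = spins @ r                      # s_i . r for every site
    seed = (int(rng.integers(L)), int(rng.integers(L)))
    cluster, stack = {seed}, [seed]
    while stack:
        i, j = stack.pop()
        for ni, nj in (((i + 1) % L, j), ((i - 1) % L, j),
                       (i, (j + 1) % L), (i, (j - 1) % L)):
            if (ni, nj) in cluster:
                continue
            # Wolff bond probability for the embedded Ising variables:
            # only neighbours with aligned projections can join the cluster.
            b = 2.0 * beta * proj[i, j] * proj[ni, nj]
            if b > 0 and rng.random() < 1.0 - np.exp(-b):
                cluster.add((ni, nj))
                stack.append((ni, nj))
    # Reflect every cluster spin in the plane perpendicular to r.
    for i, j in cluster:
        spins[i, j] -= 2.0 * proj[i, j] * r

rng = np.random.default_rng(1)
L = 32
spins = rng.normal(size=(L, L, 3))
spins /= np.linalg.norm(spins, axis=-1, keepdims=True)
for _ in range(100):
    wolff_update(spins, beta=1.0, rng=rng)
```

Swendsen-Wang builds all such clusters at once and flips each one independently; either way, whole correlated regions are updated in a single move, which is what makes these methods usable when the correlation length becomes enormous.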
[EDIT following OP comment]
In the paper I cited, the XY model is also studied and compared with the Heisenberg model. The conclusion seems to be that they are different, but it is very difficult to show the difference unless considerable computational effort is spent and a complicated finite-size analysis is conducted. In both cases a peak in the heat capacity curve is seen, and a "critical temperature" can be identified. So there will appear to be a phase transition. The next question is whether this is a true transition, with thermodynamic properties and the correlation length scaling in a systematic way (with critical exponents) as the system size becomes very large. For the XY model, this seems to be the case; for the Heisenberg model, it seems not.
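For concreteness, the heat-capacity curve in such studies is typically estimated from energy fluctuations; here is a minimal sketch, with an illustrative function name of my own:

```python
import numpy as np

def heat_capacity_per_spin(energies, T, n_spins):
    """Fluctuation estimate with k_B = 1: C/N = (<E^2> - <E>^2) / (N T^2).
    energies: total energies of configurations sampled at temperature T."""
    E = np.asarray(energies, dtype=float)
    return E.var() / (n_spins * T * T)
```

Scanning the temperature, recording the peak position $T^*(L)$ and height for a sequence of sizes $L$, and studying how these scale with $L$ is the raw input to the finite-size-scaling analysis mentioned above.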
On the face of it, while a BKT phase transition for the XY model can be explained in terms of topological defects in the configurations of spins confined to the plane, the Heisenberg model doesn't have these well-defined defects. So a BKT transition is not expected for the Heisenberg model. Nonetheless, the system has been known for years to exhibit behaviour that looks like a phase transition, and proving the matter one way or the other has been an area of active research. The paper cited above is just the latest (I think) of several on the topic.
Best Answer
To complete the answers already given: ergodicity in MC simulations is a practical problem, not a conceptual one, unlike the ergodicity property in physics. Normally, if you sample your phase space with a Markov chain that is irreducible and aperiodic, it is possible to show that, whatever the initial trial distribution, the chain will eventually sample the Gibbs distribution associated with the statistical ensemble you are interested in.
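A minimal one-dimensional sketch of this convergence, assuming a toy harmonic potential (the names and parameters here are illustrative only):

```python
import numpy as np

def metropolis_chain(energy, x0, beta, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis chain targeting p(x) ~ exp(-beta * energy(x))."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_steps)
    for t in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Accept with probability min(1, exp(-beta * dE)); this enforces
        # detailed balance with respect to the Gibbs distribution.
        if rng.random() < np.exp(-beta * (energy(x_new) - energy(x))):
            x = x_new
        samples[t] = x
    return samples

# Harmonic well: chains started far apart forget their initial condition.
E = lambda x: 0.5 * x * x
for x0 in (-10.0, 10.0):
    s = metropolis_chain(E, x0, beta=1.0, n_steps=100_000)
    print(x0, s[10_000:].mean())  # both means end up close to 0
```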
In practice, however, your system can be trapped in local minima of the potential energy surface and be ergodic only within those minima (akin to what would really happen in a supercooled liquid). This is an obvious case of ergodicity breaking in the sense suggested by sebastian above.
This can be tested quite easily: your simulations will, for instance, give different averages depending on the initial condition.
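For instance, reusing the hypothetical `metropolis_chain` sketch above on a double well at low temperature:

```python
# Double well with minima at x = +/-1; at beta = 50 the barrier at x = 0
# (height 1) is effectively impassable for a local-update chain.
E = lambda x: (x * x - 1.0) ** 2
for x0 in (-1.0, 1.0):
    s = metropolis_chain(E, x0, beta=50.0, n_steps=100_000, step=0.2)
    print(x0, s.mean())  # about -1 and about +1: the result depends on the start
```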
There are many algorithms, multicanonical sampling and parallel tempering to name just two, that can get rid of this issue.
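As an illustration, here is a minimal parallel-tempering (replica-exchange) sketch for the same double well; the temperature ladder and swap schedule are arbitrary choices made for this example:

```python
import numpy as np

def parallel_tempering(energy, betas, x0, n_sweeps, step=0.2, seed=0):
    """Run one replica per inverse temperature in `betas` (betas[0] is the
    coldest, physical one). Each sweep: one Metropolis move per replica,
    then an attempted swap between a random pair of neighbouring
    temperatures, accepted with min(1, exp((b_i - b_j) * (E_i - E_j)))."""
    rng = np.random.default_rng(seed)
    x = np.full(len(betas), x0, dtype=float)
    cold = np.empty(n_sweeps)
    for t in range(n_sweeps):
        for k, b in enumerate(betas):
            x_new = x[k] + rng.uniform(-step, step)
            if rng.random() < np.exp(-b * (energy(x_new) - energy(x[k]))):
                x[k] = x_new
        k = int(rng.integers(len(betas) - 1))   # swap candidates k, k+1
        d = (betas[k] - betas[k + 1]) * (energy(x[k]) - energy(x[k + 1]))
        if rng.random() < np.exp(d):
            x[k], x[k + 1] = x[k + 1], x[k]
        cold[t] = x[0]
    return cold

E = lambda x: (x * x - 1.0) ** 2
betas = [50.0, 20.0, 8.0, 3.0, 1.0]
s = parallel_tempering(E, betas, x0=-1.0, n_sweeps=200_000)
print(s.mean())  # now close to 0: the cold replica visits both wells
```

The hot replicas cross the barrier freely, and the swaps shuttle those configurations down to the cold chain, restoring practical ergodicity without changing the distribution sampled at the physical temperature.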