Errors in particle physics are of two kinds: statistical and systematic. The statistical error is the usual standard deviation of (approximately Gaussian) counting distributions: for a count of $N$ events, the 1-sigma fractional error is $\sqrt{N}/N = 1/\sqrt{N}$. It is the systematics that take a lot of effort, and they are often not taken into account well.

Systematic errors come from:

1) The background to the expected signal. The background is calculated theoretically and entered into a Monte Carlo program that simulates the production of the data. Uncertainties in the parameters used by the theory also enter as systematics.

2) Effects from the limitations of the measuring apparatus, which are also inserted into the Monte Carlo simulation of events.

3) The method of analysis, i.e. the cuts made in order to isolate the signal in the data and in the simulated events.

Ideally, the Monte Carlo simulation will have many more events than the data, so the statistical error from the MC can be ignored. One should estimate the errors coming from theoretical uncertainties and from detector defects by varying the important parameters and cuts in the MC program at the 1-sigma level and observing the change in the distributions.
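As a toy illustration of this procedure (all numbers here are invented, not from any real analysis), one can vary a single MC parameter, say a selection efficiency, by its assumed 1-sigma uncertainty and take the resulting shift in the yield as the systematic:

```python
import random

def simulate_yield(eff, n_events=100_000, seed=0):
    """Toy MC: count events passing a cut with selection efficiency `eff`."""
    rng = random.Random(seed)
    return sum(rng.random() < eff for _ in range(n_events))

# Hypothetical nominal efficiency and its assumed 1-sigma uncertainty.
eff_nominal, eff_sigma = 0.80, 0.02

nominal = simulate_yield(eff_nominal)
up      = simulate_yield(eff_nominal + eff_sigma)
down    = simulate_yield(eff_nominal - eff_sigma)

# Symmetrize the +/-1 sigma variations into one systematic uncertainty.
syst = (abs(up - nominal) + abs(down - nominal)) / 2
print(f"nominal yield = {nominal}, systematic uncertainty ~ {syst:.0f}")
```

In a real analysis this is repeated for every important parameter and cut, which is exactly why the full procedure is so laborious.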

Of course there are so many parameters and cuts that this procedure is not always adhered to strictly, as happened with the recent 3-sigma announcement of a "fifth force" by CDF, where the quoted errors are just the statistical ones from the events. That is why one asks for 5 sigma for a discovery. It is very hard for an effect to reach 5 sigma through the systematics enumerated above, whereas 3-sigma announcements have often disappeared. The ALEPH Higgs, for example, was a 3-sigma signal that disappeared when the other three LEP experiments looked. I have known a 4-sigma resonance that was entirely the effect of not estimating the systematics of the cuts. 5 sigma is for playing it safe.

I should add here that the different systematics in different experiments are the main reason why at least two expensive experimental setups are approved and built at colliders: independent confirmation in parallel.

Edit: I want to correct this entry because of the discussions on statistical significance prompted by the OPERA neutrino speed measurement, currently very much in the physics news. In blog discussions I learned that the current way of dealing with systematic errors is to assume they are random and add them in quadrature. Anyone interested in the details of how particle physics treats errors should go to the CERN Yellow Report 99-03, July 1999, which has an HTML version.
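A minimal sketch of the quadrature prescription, with made-up numbers for the individual uncertainties:

```python
import math

def combine_in_quadrature(*errors):
    """Combine independent (assumed random) uncertainties in quadrature."""
    return math.sqrt(sum(e * e for e in errors))

# Hypothetical uncertainties on one measurement: statistical plus two systematics.
stat, syst_bkg, syst_detector = 1.2, 0.9, 0.5
total = combine_in_quadrature(stat, syst_bkg, syst_detector)
print(f"total uncertainty = {total:.2f}")
```

Note that adding in quadrature is only justified when the individual errors really are independent; correlated systematics need a covariance treatment.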

The pseudorapidity is not Lorentz invariant, while rapidity differences are invariant under longitudinal boosts (rapidity itself just shifts additively). The pseudorapidity is equal to the rapidity in the limit $m\ll p$, so it is generally used for light particles. For many jets the mass is not expected to be small, and therefore the rapidity is the more convenient choice.
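The two quantities are easy to compare numerically. The sketch below (with made-up momenta) uses the standard definitions $y=\frac12\ln\frac{E+p_z}{E-p_z}$ and $\eta=\frac12\ln\frac{|p|+p_z}{|p|-p_z}$:

```python
import math

def rapidity(E, pz):
    """y = 0.5 * ln((E + pz) / (E - pz))."""
    return 0.5 * math.log((E + pz) / (E - pz))

def pseudorapidity(p, pz):
    """eta = 0.5 * ln((|p| + pz) / (|p| - pz)) = -ln(tan(theta/2))."""
    return 0.5 * math.log((p + pz) / (p - pz))

# A light particle (m << p): eta is an excellent approximation to y.
m, px, py, pz = 0.140, 0.0, 10.0, 20.0          # GeV, pion-like (illustrative)
p = math.sqrt(px**2 + py**2 + pz**2)
E = math.sqrt(p**2 + m**2)
print(rapidity(E, pz), pseudorapidity(p, pz))   # nearly identical

# A heavy object with the same momentum (e.g. a massive jet): they differ a lot.
m = 50.0
E = math.sqrt(p**2 + m**2)
print(rapidity(E, pz), pseudorapidity(p, pz))
```

For the heavy case the rapidity is much smaller than the pseudorapidity, which is why jet kinematics are better described in terms of $y$.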

At hadron colliders, jets are defined starting from the energy deposits of particles in the calorimeter. Jets are groups of topologically related energy deposits in the calorimeters. An incoming particle usually deposits its energy in many calorimeter cells. Cluster algorithms are designed to group cell deposits belonging to the same particle into three-dimensional clusters. The cluster energy is calculated as the sum of the cell energies and calibrated to account for the energy deposited outside the cluster and in dead material. The choice of cluster algorithm depends on the details of the calorimeter.

Jets are then reconstructed by applying a "jet algorithm" to the ensemble of reconstructed clusters. The clusters associated with a jet are selected by the jet algorithm. The cluster variables relevant for the jet definition are the direction with respect to the interaction point and the sum of the energy in the cluster's cells. From these variables, a massless four-vector is associated with each cluster. The four-vector associated with the jet, and therefore its mass, is given by the sum of the four-vectors of the jet's clusters.
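A minimal sketch of this last step, with two hypothetical clusters described by their energy and direction $(\eta,\phi)$: each cluster gets a massless four-vector, and the invariant mass of the four-vector sum is the jet mass.

```python
import math

def cluster_four_vector(E, eta, phi):
    """Massless four-vector (E, px, py, pz) for a cluster of energy E
    seen at direction (eta, phi) from the interaction point."""
    pt = E / math.cosh(eta)    # massless: |p| = E, so pt = E / cosh(eta)
    return (E, pt * math.cos(phi), pt * math.sin(phi), pt * math.sinh(eta))

def jet_mass(clusters):
    """Invariant mass of the sum of the (massless) cluster four-vectors."""
    E, px, py, pz = (sum(c) for c in
                     zip(*(cluster_four_vector(*cl) for cl in clusters)))
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# Two invented clusters: (energy in GeV, eta, phi).
clusters = [(50.0, 0.10, 0.00), (30.0, 0.15, 0.20)]
print(f"jet mass = {jet_mass(clusters):.2f} GeV")
```

Even though every cluster is massless, the jet acquires a non-zero mass from the angular spread between its clusters.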

## Best Answer

The equation you present is perhaps not the most intuitive, but it falls under the more general "forward-backward" ratio, or asymmetry equation, which looks something like
$$\mathcal{A}=\dfrac{F-B}{F+B},$$
where $F$ and $B$ are quantities evaluated in the defined "forward" and "backward" regions respectively. The idea being that any deviation of $\mathcal{A}$ from 0 would indicate such an "asymmetry", whatever that is. Since you ask in the context of particle physics, let's try to connect it to a practical example.
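As a quick numerical illustration of the definition (with made-up event counts):

```python
def asymmetry(F, B):
    """Forward-backward asymmetry A = (F - B) / (F + B)."""
    return (F - B) / (F + B)

# Hypothetical counts in the forward and backward hemispheres:
print(asymmetry(5200, 4800))   # 0.04 -- a 4% forward excess
print(asymmetry(5000, 5000))   # 0.0  -- no asymmetry
```

The normalization by $F+B$ makes $\mathcal{A}$ dimensionless and bounded between $-1$ and $+1$, so asymmetries from different samples can be compared directly.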

In the 1990s, people were colliding electrons and positrons and looking at the various outcomes (final states), particularly at pairs of muons and anti-muons:
$$e^++e^-\to\mu^++\mu^-.$$

Now, this process could be mediated by a photon and would have a cross-section $\sigma_\gamma\propto (1+\cos^2\theta)$. However, as was historically realized, it can also be mediated through a $Z^0$ boson, which leads to a much more complicated $\sigma_Z$, such that the final observable cross-section actually results from the interference of these two possible processes. Instead of being an even function of the scattering angle $\theta$, the cross-section $\sigma_\mathrm{tot}$ now favours "one side" rather than the other; defining "forward" and "backward" regions appropriately, and counting the numbers of final states $F$ and $B$ produced in those, you end up with a non-zero asymmetry $\mathcal{A}$, which can in turn tell you a lot about the various parameters that go into your cross-section, the exact nature of electroweak interactions, etc.
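A toy Monte Carlo makes this concrete: sample $\cos\theta$ from $1+\cos^2\theta$ plus a linear interference-like term (the coefficient below is invented, not a physical value), count forward versus backward events, and compare with the analytic expectation, which for $dN/d\cos\theta \propto 1+\cos^2\theta + k\cos\theta$ is $\mathcal{A}=3k/8$:

```python
import math
import random

def sample_cos_theta(asym_coeff, n, seed=1):
    """Draw cos(theta) from dN/dcos ~ (1 + cos^2) + asym_coeff * cos
    by rejection sampling; asym_coeff mimics the gamma-Z interference."""
    rng = random.Random(seed)
    out = []
    fmax = 2.0 + abs(asym_coeff)          # upper bound of the density on [-1, 1]
    while len(out) < n:
        c = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, fmax) < (1.0 + c * c) + asym_coeff * c:
            out.append(c)
    return out

events = sample_cos_theta(asym_coeff=0.6, n=100_000)
F = sum(c > 0 for c in events)            # "forward" hemisphere
B = len(events) - F                       # "backward" hemisphere
A = (F - B) / (F + B)
print(f"measured A = {A:.3f}")            # analytic expectation: 3k/8 = 0.225
```

The linear term is exactly what breaks the forward-backward symmetry of the pure $1+\cos^2\theta$ shape; set `asym_coeff=0` and the measured $\mathcal{A}$ is compatible with zero.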

To make the idea more visual, the following plot is from the JADE experiment. The solid line is $\sigma_\gamma$, while the dashed line (which is visibly a much better fit to the data) includes the $Z$ interference.