[Physics] Higgs mass and the hierarchy problem

fine-tuning, higgs, quantum-field-theory, regularization, renormalization

I was wondering what the opinion is about the importance of the hierarchy problem in the HEP community. I'm still a student and I don't really understand why there is so much attention around this issue.

One-loop corrections to the Higgs mass are divergent – in cut-off regularization they are proportional to $\Lambda^2$ – and therefore require a large fine-tuning between the parameters to make those corrections small. But this kind of problem does not appear in dimensional regularization.
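
Schematically, what I have in mind is the standard textbook estimate (the coupling $g$ here is just a generic placeholder, not a specific Standard Model coupling):

$$\delta m_H^2 \sim \frac{g^2}{16\pi^2}\,\Lambda^2 + \dots,$$

so keeping the physical mass near its observed value requires the bare parameter to cancel the $\Lambda^2$ piece almost exactly.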

People like the value of $\Lambda$ to be very large, with the argument that it should correspond to the energy scale at which our theory breaks down. I don't think that we should treat the scale $\Lambda$ as some kind of physical cut-off scale of our model, as it is just a parameter used to regularize the integral – just like the $4+\epsilon$ dimensions in dimensional regularization are not a physical thing. Why do we attach a physical meaning to $\Lambda$? Not to mention the troubles with Lorentz invariance.

Maybe the hierarchy problem is just an argument that the cut-off regularization scheme is not the right one to use?

Best Answer

Whether you do your calculations using cutoff regularization, dimensional regularization, or any other regularization is just a technical detail that has nothing to do with the existence of the hierarchy problem. Order by order, you will get the same results whatever your chosen regularization or scheme is.

The schemes and algorithms may differ in the precise moment at which you subtract some unphysical infinite terms, etc. Indeed, dimensional regularization removes the power-law divergences from the start. But the hierarchy problem may be expressed in a way that is manifestly independent of these technicalities.

The hierarchy problem is the problem that one has to fine-tune actual physical parameters of a theory expressed at a high energy scale with a huge accuracy – with error margins smaller than $(E_{low}/E_{high})^k$ where $k$ is a positive power – in order for this high-energy theory to produce the low-energy scale and light objects at all.
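
To see schematically where such a power comes from (an illustrative toy relation, not a precise formula): if the low-energy mass-squared of a scalar arises as a difference of high-scale quantities,

$$m_{low}^2 = m_0^2(E_{high}) - c\,E_{high}^2,\qquad c=\mathcal{O}(1),$$

then obtaining $m_{low}\sim E_{low}$ requires the high-scale parameter $m_0^2$ to be chosen with a relative accuracy of order $(E_{low}/E_{high})^2$, i.e. $k=2$ for a scalar mass.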

If I formulate the problem in this way, it's clear that it doesn't matter what scheme you are using to do the calculations. In particular, your miraculous "cure" based on dimensional regularization may hide the explicit $\Lambda^2$ in intermediate results, but it doesn't change anything about the dependence on the high-energy parameters.

What you would really need for a "cure" of the physical problem is to pretend that no high-energy-scale physics exists at all. But it does. It's clear that the Standard Model breaks down before we reach the Planck energy, and probably way before that. There have to be more detailed physical laws that operate at the GUT scale or the Planck scale, and those new laws have new parameters.

The low-energy parameters, such as the LHC-measured Higgs mass of 125 GeV, are complicated functions of the more fundamental high-energy parameters governing the GUT-scale or Planck-scale theory. And if you figure out what conditions are needed for the high-scale parameters to make the Higgs $10^{15}$ times lighter than the reduced Planck scale, you will see that they're unnaturally fine-tuned conditions requiring some dimensionful parameters to lie in some very precise ranges.
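
To put rough numbers on this (a trivial arithmetic sketch of my own; the ratio $10^{15}$ and the margin $(E_{low}/E_{high})^k$ are the ones quoted above):

```python
# Rough arithmetic: a Higgs ~10^15 times lighter than the high scale,
# with the error margin scaling like (E_low/E_high)^k as stated above.
ratio = 1e-15                # E_low / E_high, the hierarchy quoted in the text
for k in (1, 2):             # k = 1 (mass) or k = 2 (mass-squared) sensitivity
    print(f"k = {k}: high-scale parameters tuned to ~1 part in {ratio**-k:.0e}")
```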

More generally, it's very important to distinguish true physical insights and true physical problems from artifacts of a particular formalism. One common misconception is the belief of some people that if space is discretized – converted to a lattice, a spin network, or whatever – one cures the problem of non-renormalizability of theories such as gravity.

But this is a deep misunderstanding. The actual physical problem hiding under the "nonrenormalizability" label isn't the appearance of the symbol $\infty$, which is just a symbol that one should interpret rationally. We know that this $\infty$ as such isn't a problem because, in the end, it gets subtracted in one way or another; it is unphysical. The main physical problem is the need to specify infinitely many coupling constants – the coefficients of the arbitrarily-high-order terms in the Lagrangian – to uniquely specify the theory. The cutoff approach makes this clear because there are many different kinds of divergences, and each of these divergent expressions has to be "renamed" as a finite constant, producing a finite unspecified parameter along the way. But even if you avoid infinities and divergent terms from the start, the unspecified parameters – the finite remainders of the infinite subtractions – are still there. A theory with infinitely many terms in the Lagrangian has infinitely many pieces of data that must be measured before one may predict anything: it remains unpredictive at any point.
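
Schematically (a standard effective-field-theory way of writing it, nothing specific to this argument), the statement is that the Lagrangian contains an infinite tower of higher-dimension operators,

$$\mathcal{L} = \mathcal{L}_0 + \sum_{n}\frac{c_n}{\Lambda^{d_n-4}}\,\mathcal{O}_n,$$

where $\mathcal{L}_0$ collects the renormalizable terms, the sum runs over the infinitely many operators $\mathcal{O}_n$ of dimension $d_n>4$, and the finite coefficients $c_n$ – the remainders left over after the subtractions – are independent numbers that would have to be measured one by one.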

In a similar way, the fine-tuning required for the high-energy parameters is a problem because, using Bayesian inference, one may argue that it was "highly unlikely" for the parameters to conspire in such a way that the high-energy physical laws produce e.g. the light Higgs boson. The degree of fine-tuning (parameterized by a small number) is therefore translated into a small probability (given by the same small number) that the original theory (a class of theories with some parameters) agrees with the observations.
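
A tiny toy Monte Carlo (my own illustration, not part of the original argument) of this translation: with a uniform prior on a bare high-scale parameter, the chance of ending up with a light scalar comes out of the same order as the fine-tuning factor itself. The hierarchy is kept artificially small here so that the sampler actually finds some hits.

```python
# Toy model: the low-energy mass-squared is a difference of two high-scale
# quantities, m2_low = m2_bare - Lambda^2, with m2_bare drawn from a uniform
# ("naive Bayesian") prior.  A "light scalar" means |m2_low| < m_light^2.
import numpy as np

rng = np.random.default_rng(0)
Lambda, m_light = 100.0, 1.0      # hierarchy of only 100, so hits are visible

m2_bare = rng.uniform(0.0, 2.0 * Lambda**2, size=10_000_000)
m2_low = m2_bare - Lambda**2
p_mc = np.mean(np.abs(m2_low) < m_light**2)

print(f"Monte Carlo probability of a light scalar: {p_mc:.1e}")
print(f"Fine-tuning factor (m_light/Lambda)^2:     {(m_light / Lambda)**2:.1e}")
```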

When this fine-tuning is of order $0.1$ or even $0.01$, it's probably OK. Physicists have different tastes regarding what degree of fine-tuning they're ready to tolerate. For example, many phenomenologists have thought that even a $0.1$-style fine-tuning is a problem – the little hierarchy problem – that justifies the production of hundreds of complicated papers. Many others disagree that the "little hierarchy problem" deserves to be viewed as a real problem at all. But pretty much everyone who understands the actual "flow of information" in quantum field theory calculations, as well as basic Bayesian inference, seems to agree that fine-tuning – and hence the hierarchy problem – is a problem when it becomes too severe. The problem isn't necessarily an "inconsistency", but it does mean that there should exist an improved explanation of why the Higgs is so unnaturally light. The role of this explanation is to modify the naive Bayesian measure – one with a uniform probability distribution for the parameters – that made the observed Higgs mass look very unlikely. Using a better conceptual framework, the prior probabilities are modified so that the small parameters observed at low energies are no longer unnatural, i.e. unlikely.
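
As a crude illustration of that last point (my own, not from the answer): a flat prior on the mass-squared up to the high scale makes a 125 GeV Higgs look absurdly unlikely, while e.g. a log-uniform prior on the mass does not. The 1 GeV lower end of the log prior is an arbitrary choice, made purely for illustration.

```python
# Compare how "unlikely" a light Higgs looks under two different priors.
import math

m_h = 125.0                 # GeV, the measured Higgs mass quoted above
Lambda = 1e15 * m_h         # high scale, using the 10^15 hierarchy from the text

p_flat = (m_h / Lambda)**2                            # flat prior on m^2 in [0, Lambda^2]
p_log = math.log(m_h / 1.0) / math.log(Lambda / 1.0)  # log-uniform prior on m in [1 GeV, Lambda]

print(f"flat prior on m^2:  P(m < m_h) ~ {p_flat:.0e}")
print(f"log-uniform prior:  P(m < m_h) ~ {p_log:.2f}")
```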

Symmetries such as supersymmetry and new physics near the electroweak scale are two major representatives of the solutions to the hierarchy problem. They eliminate the huge "power-law" dependence on the parameters describing the high-energy theory. One still has to explain why the parameters at the high-energy scale are such that the Higgs is much lighter than the GUT scale, but the amount of fine-tuning needed to explain such a thing may be just "logarithmic", i.e. "one in $15\ln 10$", where 15 is the base-ten logarithm of the ratio of the mass scales. And this is of course a huge improvement over fine-tuning at a precision of "1 in 1 quadrillion".
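
Schematically (a textbook-style sketch, not a derivation), the supersymmetric cancellation works because every fermion loop is accompanied by a scalar loop with a coupling related by the symmetry,

$$\delta m_H^2 \sim \frac{1}{16\pi^2}\left(\lambda_S - y_f^2\right)\Lambda^2 + \frac{(\dots)}{16\pi^2}\,\log\frac{\Lambda}{m},$$

and supersymmetry enforces $\lambda_S = y_f^2$, so only the logarithm survives. Numerically, $15\ln 10 \approx 34.5$, so "one in $15\ln 10$" means tuning at roughly the one-in-thirty-five level rather than one part in $10^{15}$.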
