[Physics] Divergent bare parameters/couplings: what is their physical meaning? Does this have any relation to Wilson's renormalization group approach?

quantum-field-theory renormalization

I understand that the bare parameters in the Lagrangian are different from the physical ones that you measure in an experiment. I'm wondering whether the fact that they are divergent has any physical meaning. If they weren't divergent, the divergences that arise in loop calculations could not "be put" anywhere else. I'm OK with them being different from the physical ones, but why is it OK for them to be divergent?

EDIT:
I have some difficulty understanding whether this has any connection with Wilson's renormalization group approach.
The two seem to be quite different.
In the latter case you start with an effective field theory valid up to a scale $\Lambda$ (sharp cutoff) and you "integrate out" the high-momentum part of the action to see how the theory behaves at low energies.
In the other approach you want (after renormalization) to let the cutoff go to infinity and find finite results. That means the theory is no longer sensitive to the high-energy/short-distance behavior.

The running of the coupling in Wilson's approach has nothing to do with the bare parameters going to infinity when the cutoff is removed, right?

Is there any reference that tries to unify these two different approaches?
I have read these two books in depth:
Quantum and Statistical Field Theory – Le Bellac
Field Theory, the Renormalization Group, and Critical Phenomena – Amit
Do you recommend any other books/articles?

Best Answer

There are several interesting questions in the main question, plus a point in the comments that I want to address. Similar ideas are discussed here and in arXiv 0702.365.

Disclaimer: I will only speak about QFTs that have a finite UV cut-off $\Lambda$. That removes all the complications of defining the (non-perturbative) continuum limit $\Lambda\to \infty$, as discussed in the answer linked above. This limit is interesting only if you believe that a particular QFT is the absolute, true description of the universe, and it's pretty certain that this is not the case. Nevertheless, we can be interested in the limit where $\Lambda$ is very large compared to all other energy scales (which is equivalent to taking $\Lambda\to\infty$).

First of all, bare parameters can be physical, and measured. They just don't correspond to the same quantities as the renormalized parameters. For instance, take the (classical) Ising model. It has one coupling constant $K=J/T$. Using standard manipulations, one can rewrite the partition function as a field theory with action $$ S[\phi_i]=\sum_{ij} t_{ij} \phi_i \phi_j- \sum_i \ln \cosh \phi_i, $$ where $\phi_i$ is the value of the field on a lattice site $i$, and $t_{ij}$ is related to the interaction energy $K$ (see for example this article for the details). Thus, if you know $K$, which is accessible (for instance in simulations), you know the bare parameters! You can also measure this kind of parameter experimentally (the exchange energy).

Notice that this field theory is non-perturbative: if one expands the potential, all the coupling constants (and there are infinitely many of them) are of the same order! The only reason one can use the perturbative $\phi^4$ theory to describe the Ising model is that one is usually interested only in the universal quantities, which do not care about the details of the microscopic theory, as long as the universality class is the same.

For the sake of completeness: if one expands the potential (the $\ln\cosh$) to order four, the quadratic term is called the mass term and the quartic term the interaction. There is also a contribution to the mass coming from $t_{ii}$. One can then show that for $K$ large enough, the potential has two non-trivial minima, corresponding to the ferromagnetic phase. The critical value of $K$ for the transition, denoted $K^0_c$, is at the mean-field level both wrong and non-physical.
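The expansion above is easy to check symbolically. A minimal sketch (my own illustration) using sympy to expand the on-site potential $\ln\cosh\phi$ of the lattice action: the quadratic, quartic, and sextic couplings all come out with coefficients of order one, which is the sense in which the bare theory is non-perturbative.

```python
# Expand the on-site potential ln cosh(phi) from the exact lattice
# rewriting of the Ising model, and read off the couplings.
import sympy as sp

phi = sp.symbols('phi')
V = sp.log(sp.cosh(phi))

# Taylor expansion around phi = 0, up to (and excluding) order 8
series = sp.series(V, phi, 0, 8).removeO()

# Coefficients of the mass term (phi^2), the quartic interaction (phi^4),
# and the first "extra" coupling (phi^6): all of order one.
coeffs = {n: series.coeff(phi, n) for n in (2, 4, 6)}
print(series)   # phi**2/2 - phi**4/12 + phi**6/45
print(coeffs)   # {2: 1/2, 4: -1/12, 6: 1/45}
```

Truncating at order four gives the usual $\phi^4$ mass and interaction terms; the point is that nothing in the bare action makes the $\phi^6$ and higher couplings small.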

So far so good. Now let's add the fluctuations of the field, which will "renormalize" the theory. At one loop, one sees that the quadratic term (the "mass") receives a correction proportional to some power of the cut-off; that is, it depends on the way the regularization is done, whether the lattice is cubic or triangular, etc. Is this a problem? Not at all. It is just telling you that the (real, physical) critical coupling $K_c$ is non-universal: it depends strongly on the microscopic details of the system. In some sense, the calculation of these "divergent" (in reality, cut-off dependent) integrals corresponds to the calculation of the critical temperature, given the microscopic physics.

From the Wilsonian RG, we know that some quantities will depend on the cut-off, such as the critical temperature. They are usually very hard to compute using a field theory, since the initial action is non-perturbative and one cannot use the perturbative RG (Wilsonian or not). Only non-perturbative schemes (numerical approaches, or the non-perturbative RG discussed in the arXiv articles linked above) can access these quantities. But there are universal quantities, such as the critical exponents, that can be computed with a perturbative approach, as long as one stays in the same universality class. To compute these quantities, one has to be close to the fixed point of the RG, that is, at energies very low compared to the microscopic scales, which is equivalent to taking $\Lambda\to \infty$.

This leads us to the difference between the Wilsonian RG and the "old school" RG. In the former, one imposes the microscopic value of $K$ and then looks at what the physical mass is. In the latter, one imposes the physical value of the mass and does not care about the microscopic details, so one wants to send $\Lambda\to \infty$. One thus has to absorb the "correction" to the mass in order to fix it.

So, to (finally, but partially) answer your question "The running of the coupling in Wilson's approach has nothing to do with the bare parameters going to infinity when the cutoff is removed right?" :

In the Wilsonian approach, one starts from the microscopic scale $\Lambda$ and looks at what is going on at lower energies, whereas in the "standard" approach, one fixes the macroscopic scale and sends $\Lambda\to \infty$ in order to effectively probe smaller and smaller energy scales.
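The two readings can be sketched with a toy model (my own illustration, with a hypothetical coupling $g$): take the one-loop relation $m_{\rm phys}^2 = m_0^2 + g\,\Lambda$, with the linear cutoff dependence of the $d=3$ tadpole. Fixing the bare mass and deducing the physical one is the Wilsonian reading; fixing the physical mass and solving for the bare one is the "old school" reading, and then the bare mass diverges as $\Lambda\to\infty$.

```python
# Toy one-loop relation: m_phys^2 = m_0^2 + g * Lambda
# (g is a hypothetical fixed coupling for illustration).
g = 0.1

def physical_mass_sq(m0_sq, Lam):
    # Wilsonian reading: impose the bare mass at the scale Lambda,
    # deduce the physical mass.
    return m0_sq + g * Lam

def bare_mass_sq(m_phys_sq, Lam):
    # "Old school" reading: impose the physical mass, deduce the bare one.
    # It diverges (here linearly, towards -infinity) as Lambda grows.
    return m_phys_sq - g * Lam

for Lam in (1e2, 1e4, 1e6):
    print(Lam, bare_mass_sq(1.0, Lam))  # more and more negative
```

So the "divergence" of the bare parameter is just the statement that keeping the macroscopic physics fixed while pushing the microscopic scale away requires the bare parameter to track the cut-off dependent correction.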