Quantum Field Theory – Why Do Irrelevant Operators Require Infinitely Many Counterterms?

dimensional-analysis, effective-field-theory, interactions, quantum-field-theory, renormalization

As far as I understand it, in the Wilsonian picture of renormalization, we view a theory as having some fixed cutoff and bare couplings, and integrate out high-momentum modes to understand what happens at low momentum. We say that an operator is relevant if its coupling constant grows when we go to low momentum scales, and irrelevant if it shrinks.
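A minimal sketch of how that grows/shrinks statement follows from dimensional analysis (the symbols $O_i$, $\Delta_i$, $\hat g_i$ are my own illustrative notation, not anything from the question):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Tree-level Wilsonian scaling of a coupling g_i multiplying an
% operator O_i of mass dimension Delta_i in d spacetime dimensions;
% the coupling g_i then has mass dimension d - Delta_i.
Define the dimensionless coupling at scale $\mu$ as
$\hat g_i(\mu) \equiv g_i \, \mu^{\Delta_i - d}$. Ignoring loop
corrections, $g_i$ itself is constant, so
\begin{equation}
  \hat g_i(\mu)
  = \hat g_i(\Lambda)\left(\frac{\mu}{\Lambda}\right)^{\Delta_i - d}.
\end{equation}
As $\mu \to 0$ (the IR), $\hat g_i$ grows if $\Delta_i < d$
(relevant) and shrinks if $\Delta_i > d$ (irrelevant).
\end{document}
```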

Now, in the "usual" picture of renormalization, we have a QFT which we want to define as a continuum limit of a theory with a cutoff, i.e. the limit where the cutoff goes to infinity. We want to take this limit while holding the physical coupling constants at some energy scale fixed; to do this, we add cutoff-dependent counterterms to the bare Lagrangian. We say that an interaction is renormalizable if we only need to add a finite number of counterterms, and non-renormalizable if we need an infinite number.
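For a concrete picture of "finitely many counterterms", here is the standard textbook example (e.g. Peskin & Schroeder, Ch. 10): $\phi^4$ theory in four dimensions closes on three counterterms, one for each operator already present in the Lagrangian:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% phi^4 theory in d = 4: all cutoff dependence is absorbed by the
% three counterterm coefficients delta_Z, delta_m, delta_lambda,
% which is the operational meaning of "renormalizable".
\begin{equation}
  \mathcal{L}
  = \tfrac{1}{2}(\partial\phi)^2 - \tfrac{1}{2}m^2\phi^2
    - \tfrac{\lambda}{4!}\phi^4
  + \tfrac{1}{2}\delta_Z(\partial\phi)^2
    - \tfrac{1}{2}\delta_m\phi^2
    - \tfrac{\delta_\lambda}{4!}\phi^4 .
\end{equation}
\end{document}
```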

However, I don't understand how these two pictures fit together. In particular, it is usually stated that irrelevant operators are non-renormalizable, and relevant operators are renormalizable, but this doesn't seem obvious to me. Can someone explain why this is true?

Best Answer

At least at the operational level, whether an operator is relevant or irrelevant (in the IR) is determined by its canonical scaling dimension.

I think the BPHZ picture of renormalization might help here. Given a physical theory, we'd like to estimate which Feynman graphs of that theory diverge. You can estimate a "superficial degree of divergence" for each Feynman diagram from the canonical scaling dimensions of the vertices and edges in the diagram. Say you have a diagram with a vertex corresponding to an IR-irrelevant (UV-relevant) interaction term. Then, in the UV (for large momenta), it will cause the amplitude to diverge. As you go to higher loop order, or insert more of those vertices at the same loop order, the divergence becomes worse. And as you add more and more such vertices, the corresponding diagrams become more and more important in the UV, precisely because the couplings are UV-relevant! The essence is that there are infinitely many ("independent") divergent Feynman diagrams, so no finite set of counterterms can cancel all the divergences. So these terms in the Lagrangian give rise to non-renormalizable theories.
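The power counting behind this can be written in one line (scalar fields only, for simplicity; this follows the conventions of Peskin & Schroeder, Section 10.1, with $[g_i]$ denoting the mass dimension of the coupling at vertex $i$):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Superficial degree of divergence D of a scalar diagram in d
% spacetime dimensions with N external legs and V_i insertions of
% vertex i; d_i is the mass dimension of the operator at vertex i,
% so its coupling has mass dimension [g_i] = d - d_i.
\begin{equation}
  D = d - \frac{d-2}{2}\,N - \sum_i V_i\,[g_i]
    = d - \frac{d-2}{2}\,N + \sum_i V_i\,(d_i - d).
\end{equation}
If vertex $i$ is irrelevant, $[g_i] < 0$, so each extra insertion
raises $D$: for any number of external legs $N$ there are diagrams
with $D \ge 0$, i.e.\ infinitely many divergent amplitudes.
\end{document}
```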

See, e.g., Peskin & Schroeder, Section 10.1, for how to calculate the superficial degree of divergence of any diagram. Another technicality is that you don't actually consider full Feynman graphs; you instead consider subgraphs. That means the external (uncontracted) legs can be off shell, as if the subgraph were sitting inside a bigger Feynman diagram.
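As a quick worked instance of that power counting (my choice of example, not from the answer above): take a $g\,\phi^6$ interaction in $d = 4$, so $[g] = -2$:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% phi^6 in d = 4: plugging d = 4, d_i = 6 into the formula above
% gives D = 4 - N + 2V for a diagram with N external legs and V
% six-point vertices.
\begin{equation}
  D = 4 - N + 2V \ge 0
  \quad\Longleftrightarrow\quad
  V \ge \frac{N-4}{2},
\end{equation}
so every $N$-point function diverges at a high enough order in $g$,
and each new counterterm ($\phi^8$, $\phi^{10}$, \dots) is itself
irrelevant and feeds back into the same problem.
\end{document}
```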

Given your question, maybe you've already come across the stuff I've said and are asking for something else, in which case your question is not clear to me.