[Math] Ways to prove an inequality

Tags: ca.classical-analysis-and-odes, convexity, mp.mathematical-physics, sums-of-squares

It seems that there are three basic ways to prove an inequality, e.g. $x>0$.

  1. Show that $x$ is a sum of squares.
  2. Use an entropy argument (entropy always increases).
  3. Use convexity.

Are there other means?

Edit: I was looking for something fundamental. For instance, Lagrange multipliers reduce to convexity. I have not read Steele's book, but is there a way to prove monotonicity that doesn't reduce to entropy? And what is the meaning of positivity?

Also, I would not consider the bootstrapping method, normalization to change additive inequalities to multiplicative ones, or changing equalities to inequalities as methods of proving inequalities. These methods only change the form of the inequality, replacing the original inequality by an equivalent one (or a class of equivalent ones). Further, the proof of the equivalence follows elementarily from the definition of the real numbers.

As for proofs of the fundamental theorem of algebra, the question again is: what do they reduce to? These arguments are high-level concepts mainly involving arithmetic, topology or geometry, but what do they reduce to at the level of the inequality?

Further edit: Perhaps I was looking too narrowly at first. Thank you to all contributors for opening my eyes to the myriad possibilities of proving and interpreting inequalities in other contexts!

Best Answer

I don't think your question is a mathematical one, for the question of what all inequalities eventually reduce to has a simple answer: axioms. I interpret it as a metamathematical question, and even then I believe the closest answer is the suggestion above about using everything you know.

An inequality is a fairly general mathematical term, which can be attributed to any comparison. One example is complexity hierarchies, where you compare which of two problems has the higher complexity, can be solved faster, etc. Another is studying convergence of series, that is, comparing a quantity with infinity; here you find Tauberian theory, etc. Even though you did not specify in your question which kind of inequalities you are primarily interested in, I am assuming that you are talking about comparing two functions of several real/complex variables. I would be surprised if there were a list of exclusive methods that inequalities of this sort follow from. It is my impression that there is a plethora of theorems/principles/tricks available, and the proof of an inequality is usually a combination of some of these. I will list a few things that come to my mind when I'm trying to prove an inequality; I hope it helps a bit.

First I try to see if the inequality will follow from an equality, that is, to recognize the terms in the expression as part of some identity you are already familiar with. I disagree with you when you say this shouldn't be counted as a method of proving inequalities. Say you want to prove that $A\geq B$, and you can prove $A=B+C^2$; then, sure, the inequality follows from "squares are nonnegative", but most of the time it is the identity that proves to be the hardest step. Here's an example: given reals $a_1,a_2,\dots, a_n$, you want to prove that $$\sum_{i,j=1}^n \frac{a_ia_j}{1+|i-j|} \geq 0.$$ Once you realize that this sum is equal to $$\frac{1}{2\pi}\int_{0}^{2\pi}\int_{0}^{1}\frac{1-r^{2}}{1-2r\cos(x)+r^{2}}\cdot \Bigl|\sum_{k=1}^{n}a_{k}e^{-ikx}\Bigr|^{2}\,dr\,dx,$$ then, yes, everything is obvious, but spotting the equality is clearly the nontrivial step in the proof.
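As a quick numerical sanity check (no substitute for the identity), note that the quadratic form above is $a^{T}Ma$ with $M_{ij}=1/(1+|i-j|)$, so nonnegativity for all real $a_i$ is equivalent to $M$ being positive semidefinite. A short sketch, assuming NumPy is available:

```python
import numpy as np

# Sanity check (not a proof): the quadratic form sum_{i,j} a_i a_j / (1+|i-j|)
# equals a^T M a with M[i][j] = 1/(1+|i-j|), so nonnegativity for all real a
# is the statement that M is positive semidefinite.
n = 12
M = 1.0 / (1.0 + np.abs(np.subtract.outer(np.arange(n), np.arange(n))))

min_eig = np.linalg.eigvalsh(M).min()
assert min_eig >= -1e-12  # all eigenvalues nonnegative, up to rounding

# Direct check of the quadratic form on random coefficient vectors
rng = np.random.default_rng(0)
for _ in range(1000):
    a = rng.standard_normal(n)
    assert a @ M @ a >= -1e-9
```

Of course this only gives evidence for one matrix size; the integral identity is what proves it for every $n$.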

In some instances it might be helpful to think about combinatorics, probability, algebra or geometry. Is the quantity $x$ enumerating objects you are familiar with, the probability of an event, the dimension of a vector space, or the area/volume of a region? There are plenty of inequalities that follow this way. Think of Littlewood-Richardson coefficients, for example.

Another helpful factor is symmetry. Is your inequality invariant under permuting some of its variables? While I don't remember the paper right now, Pólya has an article where he talks about the "principle of nonsufficient reason", which basically boils down to the strategy that if your function is symmetric enough, then so are its extremal points (there is no sufficient reason to expect asymmetry in the maximal/minimal points, is how he puts it). This is similar in vein to using Lagrange multipliers. Note, however, that sometimes it is the opposite of this that comes in handy. Schur's inequality, for example, is known to be impossible to prove using "symmetric methods"; one must break the symmetry by assuming an arbitrary ordering on the variables. (I think it was sent by Schur to Hardy as an example of a symmetric polynomial inequality that doesn't follow from Muirhead's theorem, see below.)
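For concreteness, Schur's inequality says that for nonnegative reals $x,y,z$ and $t>0$, $$x^t(x-y)(x-z)+y^t(y-z)(y-x)+z^t(z-x)(z-y)\ge 0.$$ A random spot-check (illustration only, not a proof) can be sketched as:

```python
import random

def schur_lhs(x, y, z, t):
    """Left-hand side of Schur's inequality for nonnegative x, y, z and t > 0."""
    return (x**t * (x - y) * (x - z)
            + y**t * (y - z) * (y - x)
            + z**t * (z - x) * (z - y))

random.seed(1)
for _ in range(10000):
    x, y, z = (random.uniform(0.0, 10.0) for _ in range(3))
    t = random.uniform(0.1, 3.0)
    assert schur_lhs(x, y, z, t) >= -1e-9  # nonnegative up to rounding
```

The actual proof breaks the symmetry by assuming, without loss of generality, $x \ge y \ge z$ and grouping terms.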

Majorization theory is yet another powerful tool. The best reference that comes to mind is Marshall and Olkin's book "Inequalities: Theory of Majorization and Its Applications". This is related to what you call convexity and to some other notions. Note that there is a lot of literature devoted to inequalities involving "almost convex" functions, where a notion weaker than convexity is usually used. Also note the concepts of Schur convexity, quasiconvexity, pseudoconvexity, etc. One of the simplest applications of majorization theory is Muirhead's inequality, which already generalizes a lot of classical inequalities, including the ones that appear in competitions.
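A minimal sketch of the majorization order and of Muirhead's inequality (the exponent pair $(2,0,0)$ versus $(1,1,0)$ is just a sample choice, giving $2(x^2+y^2+z^2)\ge 2(xy+yz+zx)$):

```python
import itertools
import math
import random

def majorizes(a, b):
    """True if exponent vector a majorizes b: equal sums, and every prefix
    sum of a (sorted descending) dominates the corresponding prefix of b."""
    a, b = sorted(a, reverse=True), sorted(b, reverse=True)
    if abs(sum(a) - sum(b)) > 1e-12:
        return False
    pa = pb = 0.0
    for ai, bi in zip(a, b):
        pa, pb = pa + ai, pb + bi
        if pa < pb - 1e-12:
            return False
    return True

def sym_sum(exponents, xs):
    """Muirhead's symmetric sum: sum over permutations perm of the exponents
    of prod_i xs[i] ** perm[i]."""
    return sum(math.prod(x ** e for x, e in zip(xs, perm))
               for perm in itertools.permutations(exponents))

# (2,0,0) majorizes (1,1,0), so Muirhead's inequality predicts
# sym_sum((2,0,0), xs) >= sym_sum((1,1,0), xs) for positive xs.
assert majorizes((2, 0, 0), (1, 1, 0))
random.seed(2)
for _ in range(1000):
    xs = [random.uniform(0.1, 5.0) for _ in range(3)]
    assert sym_sum((2, 0, 0), xs) >= sym_sum((1, 1, 0), xs) - 1e-9
```

Schur's inequality mentioned above is the standard example that does *not* fall out of this machinery.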

Sometimes you might want to take advantage of the duality between the discrete and the continuous. So, depending on which tools you have at your disposal, you may choose to prove, say, the inequality $$\sum_{n=1}^{\infty}\left(\frac{a_1+\cdots+a_n}{n}\right)^p\le \left(\frac{p}{p-1}\right)^p \sum_{n=1}^{\infty}a_n^p$$ or its continuous/integral version $$\int_{0}^{\infty}\left(\frac{1}{x}\int_0^x f(t)dt\right)^p dx \le \left(\frac{p}{p-1}\right)^p \int_{0}^{\infty} f(x)^p dx.$$ I've found this useful on various occasions (in both directions).
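The discrete inequality above is Hardy's inequality. A finite truncation can be spot-checked numerically (illustration only; appending zeros to a finite sequence only increases the left-hand side, so a truncated check is a valid test of the full statement):

```python
import random

def hardy_holds(a, p):
    """Check sum_n ((a_1+...+a_n)/n)^p <= (p/(p-1))^p * sum_n a_n^p
    on a finite nonnegative sequence a (a truncation of the full series)."""
    partial, lhs = 0.0, 0.0
    for n, an in enumerate(a, start=1):
        partial += an
        lhs += (partial / n) ** p      # ((a_1+...+a_n)/n)^p
    rhs = (p / (p - 1)) ** p * sum(an ** p for an in a)
    return lhs <= rhs + 1e-9

random.seed(3)
for _ in range(200):
    a = [random.uniform(0.0, 1.0) for _ in range(50)]
    p = random.uniform(1.5, 4.0)
    assert hardy_holds(a, p)
```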

Other things that come to mind, but that I'm too lazy to describe, are "integration preserves positivity", the uncertainty principle, using the mean value theorem to reduce the number of variables, etc. What also comes in handy, sometimes, is searching whether others have considered your inequality before. This might prevent you from spending too much time on an inequality like $$\sum_{d|n}d \le H_n+e^{H_n}\log H_n,$$ where $H_n=\sum_{k=1}^n \frac{1}{k}$.
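(That last divisor-sum bound is Lagarias' reformulation of the Riemann hypothesis, so no amount of numerical checking will settle it; still, a brute-force verification for small $n$, sketched below as an illustration, shows what such a search would have told you before you started.)

```python
import math

def sigma(n):
    """Sum of the divisors of n (naive O(n) loop, fine for small n)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

H = 0.0
for n in range(1, 2001):
    H += 1.0 / n                      # harmonic number H_n
    bound = H + math.exp(H) * math.log(H)
    assert sigma(n) <= bound + 1e-9   # holds for every n checked here
```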
