Intermediate inequalities: is there a way to know if you're getting a "bad deal"?

Tags: a.m.-g.m.-inequality, contest-math, inequality, real-analysis, reference-request

Let's say I want to prove a contest-style inequality $f(a, b, c) + g(a, b, c) \ge h(a, b, c)$ on some set $S \subseteq \mathbb{R}^3$. Suppose I want to make the LHS simpler by applying the AM-GM inequality, so that it now suffices to prove $2\sqrt {f(a, b, c)\, g(a, b, c)} \ge h(a, b, c)$. However, this new inequality may or may not be true. So my questions are:

1) Is it known which inequalities are "better" than others? For example, when you solve an inequality, do you think "Let me try to avoid using QM-AM; AM-HM is better"?

2) If there are no general rules for question 1), is there a way for me to tell if I'm getting a "bad deal" by using some inequality?

3) Does the answer to 1) depend on the internal structure of the expressions you're applying the inequalities to? For example, is it possible that applying AM-GM to $a^2+2b+c$ and $b^2+2a+c$ gives a very bad inequality, but applying it to something like $\dfrac {ab}{c}$ and $\dfrac {bc}{a}$ gives a very good one?

If you have a book recommendation that deals with this topic, I would appreciate it as well.

Best Answer

While I agree with the comments that this is an art rather than a science, there are patterns, and one important one (at least for me) came from The Cauchy-Schwarz Master Class (J. Michael Steele).

You typically get a "good deal" from inequalities when you're working in a region where the inequality is sharp.

As an (obvious) example, say you're working with the approximation $e^x \geq 1 + x$. This approximation is terrible most of the time (proof by picture below):

[Figure: graphs of $e^x$ and $1 + x$, nearly indistinguishable near $x = 0$ but separating quickly away from it]

But notice this inequality is sharp (i.e. $e^x = 1+x$) when $x = 0$. This tells us (basically by continuity) that this inequality is a good deal when $x \approx 0$. The graph above shows this.

This is a good example, because we know from calculus exactly how good a deal this approximation is, and why it becomes a better and better deal as $x \to 0$. Thankfully, this same heuristic can be used in many important inequalities.
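If you want to see this numerically, here is a minimal Python sketch (nothing beyond the standard library; the sample points are arbitrary) tabulating the gap $e^x - (1+x)$, which Taylor's theorem says shrinks like $x^2/2$ near zero:

```python
import math

# Tangent-line bound e^x >= 1 + x: tabulate the gap as x -> 0.
# By Taylor's theorem the gap behaves like x^2 / 2 near zero.
for x in [2.0, 1.0, 0.5, 0.1, 0.01]:
    gap = math.exp(x) - (1 + x)
    print(f"x = {x:5.2f}   e^x - (1+x) = {gap:.7f}")
```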


Since you mention the AM-GM inequality, let's ask: when is AM-GM sharp?

One can show (and indeed you should) that for nonnegative reals, $(a_1 a_2 \cdots a_n)^{1/n} = \frac{a_1 + a_2 + \cdots + a_n}{n}$ iff the $a_i$ are all equal.
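As a quick numeric illustration (a minimal sketch; the sample lists are hypothetical, chosen only to contrast nearly-equal and spread-out entries), the ratio AM/GM measures exactly how much slack the inequality leaves:

```python
import math

def am(xs):
    return sum(xs) / len(xs)

def gm(xs):
    return math.prod(xs) ** (1 / len(xs))

# AM/GM equals 1 exactly when all entries agree, and grows as they spread.
for xs in [[4, 4, 4, 4], [3.9, 4.0, 4.1, 4.0], [1, 2, 8, 16]]:
    print(xs, " AM/GM =", round(am(xs) / gm(xs), 4))
```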

This gives us a clue as to when we should use the AM-GM inequality: we get a good deal whenever the $a_i$ are approximately equal! Let's try to use this to prove something fun: Carleman's Inequality.

$$\sum \limits_{k=1}^\infty (a_1 a_2 \cdots a_k)^{1/k} \leq e \sum \limits_{k=1}^\infty a_k$$

The left side of this is crying out for the AM-GM inequality! You might suspect that hitting the left side with AM-GM will give rise to something with the desired sum, plus some extra $\frac{1}{k}$ terms which we can massage into $\left(1 + \frac{1}{k}\right)^k$ to get our factor of $e$. Unfortunately...

$$ \sum_{k=1}^n (a_1 a_2 \cdots a_k)^{1/k} \leq \sum_{k=1}^n \frac{1}{k} \sum_{j=1}^k a_j = \sum_{j=1}^n a_j \sum_{k=j}^n \frac{1}{k} $$

The inner sum $\sum_{k=j}^n \frac{1}{k}$ is a tail of the harmonic series, so this bound diverges as $n \to \infty$, and it seems we're out of luck. The problem, as you may have guessed from the fact that I'm using this as an example, is that we are applying the AM-GM inequality where it is weak: nothing forces the terms being averaged to be close to one another. If we can reframe the problem in a way that guarantees the $a_k$ are all roughly the same, then we can get a better bargain when using it.
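To see the divergence concretely, here is a minimal numeric sketch of the right-hand bound above (with the hypothetical test sequence $a_j = 1/j^2$, chosen only because $\sum a_j$ converges):

```python
# Naive AM-GM bound: sum_{j=1}^{n} a_j * sum_{k=j}^{n} 1/k.
# Even for the summable test sequence a_j = 1/j^2 (a hypothetical
# choice), the bound keeps growing with n.
a = lambda j: 1 / j**2

for n in [10, 100, 1000, 10000]:
    tail = 0.0    # running value of sum_{k=j}^{n} 1/k
    bound = 0.0
    for j in range(n, 0, -1):
        tail += 1 / j
        bound += a(j) * tail
    print(f"n = {n:6d}   naive bound = {bound:.4f}")
```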

Let's fix this problem with the least possible effort: we'll introduce a new sequence $c_k$ such that $c_1 a_1 \approx c_2 a_2 \approx c_3 a_3 \approx \cdots$. If we repeat our calculation with these new fudge factors, we can figure out later exactly what they should be to make the inequality work out.

$$ \begin{align*} \sum \limits_{k=1}^\infty (a_1 a_2 \cdots a_k)^{1/k} &= \sum \limits_{k=1}^\infty \frac{(a_1 c_1 a_2 c_2 \cdots a_k c_k)^{1/k}}{(c_1 c_2 \cdots c_k)^{1/k}}\\ &\leq \sum \limits_{k=1}^\infty \frac{a_1c_1 + a_2c_2 + \cdots + a_k c_k}{k(c_1 c_2 \cdots c_k)^{1/k}}\\ &= \sum \limits_{k=1}^\infty a_k c_k \sum \limits_{j=k}^\infty \frac{1}{j(c_1 c_2 \cdots c_j)^{1/j}} \end{align*} $$

Ok, at some point we have to actually do work, and this is where that happens. We want the inner sum in $$s_k = c_k \sum \limits_{j=k}^\infty \frac{1}{j(c_1 c_2 \cdots c_j)^{1/j}}$$ to converge, and preferably to make $s_k$ bounded above by $e$, so that we can recover the desired inequality. We again take the easiest path to making the sum converge: let's force it to telescope. Recall

$$\sum \limits_{j=k}^\infty \frac{1}{j(j+1)} = \sum \limits_{j=k}^\infty \left( \frac{1}{j} - \frac{1}{j+1} \right) = \frac{1}{k} $$
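A quick numeric sanity check (truncating at a hypothetical cutoff $N$) confirms the tail sums sit at $1/k - 1/(N+1)$, just below $1/k$:

```python
# Telescoping check: sum_{j=k}^{N} 1/(j(j+1)) = 1/k - 1/(N+1) -> 1/k.
N = 100_000
for k in [1, 2, 5, 10]:
    s = sum(1 / (j * (j + 1)) for j in range(k, N + 1))
    print(f"k = {k:2d}   tail sum = {s:.6f}   1/k = {1 / k:.6f}")
```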

Our sequence $c_k$ is still undefined. If we define it so that $(c_1 c_2 \cdots c_j)^{1/j} = j+1$, that forces

$$ s_k = c_k \sum \limits_{j=k}^\infty \frac{1}{j(c_1 c_2 \cdots c_j)^{1/j}} = c_k \sum \limits_{j=k}^\infty \frac{1}{j(j+1)} = \frac{c_k}{k} $$

But what is $c_j$? Notice $(c_1 c_2 \cdots c_{j-1}) = j^{j-1}$ and $(c_1 c_2 \cdots c_j) = (j+1)^j$. Thus

$$c_j = \frac{(j+1)^j}{j^{j-1}} = j \left ( 1 + \frac{1}{j} \right )^j$$
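This choice is easy to sanity-check numerically; a minimal sketch over the first ten terms confirms both the geometric-mean condition $(c_1 \cdots c_j)^{1/j} = j+1$ and the resulting values $s_j = c_j / j = \left(1 + \frac{1}{j}\right)^j$, all safely below $e$:

```python
# With c_j = j * (1 + 1/j)^j, the condition (c_1...c_j)^(1/j) = j + 1
# should hold, and s_j = c_j / j = (1 + 1/j)^j stays below e = 2.718...

def c(j):
    return j * (1 + 1 / j) ** j

prod = 1.0
for j in range(1, 11):
    prod *= c(j)
    print(f"j = {j:2d}   (c_1...c_j)^(1/j) = {prod ** (1 / j):9.6f}   "
          f"s_j = {c(j) / j:.6f}")
```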

Now we see the light at the end of the tunnel!

$$ \begin{align*} \sum \limits_{k=1}^\infty (a_1 a_2 \cdots a_k)^{1/k} &\leq \sum \limits_{k=1}^\infty a_k s_k\\ &= \sum \limits_{k=1}^\infty a_k \frac{c_k}{k}\\ &= \sum \limits_{k=1}^\infty a_k \left ( 1 + \frac{1}{k} \right )^k\\ &\leq e \sum \limits_{k=1}^\infty a_k \end{align*} $$

where the last step uses the classical fact that $\left(1 + \frac{1}{k}\right)^k$ increases monotonically to $e$.
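As a final check, here is a minimal numeric verification of Carleman's inequality itself, using the hypothetical test sequence $a_k = 2^{-k}$ (logs keep the running product stable):

```python
import math

# Carleman's inequality: sum_k (a_1...a_k)^(1/k) <= e * sum_k a_k.
# Test sequence a_k = 2^{-k} (a hypothetical choice for illustration).
N = 200
a = [0.5 ** k for k in range(1, N + 1)]

lhs = 0.0
log_prod = 0.0                        # running log(a_1 ... a_k)
for k, ak in enumerate(a, start=1):
    log_prod += math.log(ak)
    lhs += math.exp(log_prod / k)     # (a_1 ... a_k)^(1/k)
rhs = sum(a)

print(f"LHS = {lhs:.6f}   e * sum a_k = {math.e * rhs:.6f}")
print(f"ratio LHS / sum a_k = {lhs / rhs:.4f}  (should be below e)")
```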


This example was fairly long, but hopefully shows how we can get a "good deal" from our inequalities just by using them where they are sharp. It was taken (almost verbatim) from the book I mentioned at the top of the post, The Cauchy-Schwarz Master Class. It is absolutely worth reading, and changed the way that I (a dyed-in-the-wool algebraist) approach analysis and inequalities. It is packed full of tricks and tips for getting the most out of your inequalities, and does a good job explaining how and when to apply them at their full power. I found it useful, and I hope others do too!


I hope this helps ^_^
