Does convexity at a single point imply convexity w.r.t. finite convex combinations?

calculus, convex-analysis, examples-counterexamples, jensen-inequality, real-analysis

Let $\phi:(0,\infty) \to [0,\infty)$ be a continuous function, and let $c \in (0,\infty)$ be fixed.

Suppose that "$\phi$ is convex at $c$", i.e. for any $x_1,x_2>0$ and $\alpha \in [0,1]$ satisfying $\alpha x_1 + (1- \alpha)x_2 =c$, we have
$$
\phi(c)=\phi\left(\alpha x_1 + (1- \alpha)x_2 \right) \leq \alpha \phi(x_1) + (1-\alpha)\phi(x_2) .
$$

Let $\lambda_i \in [0,1],x_i\in(0,\infty),i=1,\dots,k$ satisfy $\sum_{i=1}^k \lambda_i=1,\sum_{i=1}^k \lambda_ix_i=c$.
Does
$$
\phi\left(\sum_{i=1}^k \lambda_ix_i\right) \le \sum_{i=1}^k \lambda_i \phi(x_i)
$$
hold?

Does the answer change if we assume that $\phi$ is strictly decreasing in a neighbourhood of $c$?

Edit:

If $\phi$ is differentiable at $c$, then the answer is positive. Indeed, it is proved here that
$$
\phi(x) \ge \phi(c)+\phi'(c) (x-c) \quad \text{for all } x>0
$$

($\phi$ has a subgradient at $c$), which implies the required assertion.
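
Spelled out: evaluating this inequality at $x=x_i$, multiplying by $\lambda_i$, and summing over $i$ gives
$$
\sum_{i=1}^k \lambda_i \phi(x_i) \ge \phi(c)\sum_{i=1}^k \lambda_i + \phi'(c)\left(\sum_{i=1}^k \lambda_i x_i - c\right) = \phi(c).
$$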

I guess that there should be (non-differentiable) counterexamples, since the standard proof by induction on $k$ does not adapt to this case (it considers convex combinations whose mean is different from $c$).

Also, note that the one-sided derivatives do not necessarily exist. I think that the question boils down to whether or not there exists a supporting line to the graph of $\phi$ at $c$.

Best Answer

Yes, the conclusion holds. From reading the new comments, I guess my solution is something like a "supporting line".

Lemma: For $x_1 \in (0,c)$ and $x_2 \in (c,+\infty)$, "$\phi$ convex at $c$" implies

$$\frac{\phi(x_2)-\phi(c)}{x_2-c} \ge \frac{\phi(c)-\phi(x_1)}{c-x_1}.$$

Geometrically that means the line through the points $(c,\phi(c))$ and $(x_2,\phi(x_2))$ has at least the slope of the line through $(x_1,\phi(x_1))$ and $(c,\phi(c))$.

Proof of Lemma: The choice of

$$\alpha=\frac{x_2-c}{x_2-x_1} \Rightarrow 1-\alpha=\frac{c-x_1}{x_2-x_1}$$

leads to $\alpha x_1+(1-\alpha)x_2=c$. Note that since $x_1 < c < x_2$, the quotients are well defined and $\alpha,1-\alpha \ge 0$, so $\alpha \in [0,1]$ as required. Hence we can apply "$\phi$ convex at $c$" and get

$$\phi(c) \le \frac{x_2-c}{x_2-x_1}\phi(x_1) + \frac{c-x_1}{x_2-x_1}\phi(x_2).$$

To isolate $\phi(x_2)$, we multiply the inequality by the positive factor $\frac{x_2-x_1}{c-x_1}$ to get

$$\frac{x_2-x_1}{c-x_1} \phi(c) \le \frac{x_2-c}{c-x_1}\phi(x_1) + \phi(x_2) \Rightarrow \phi(x_2) \ge \frac{x_2-x_1}{c-x_1} \phi(c) - \frac{x_2-c}{c-x_1}\phi(x_1).$$

From the latter inequality we subtract $\phi(c)$ from both sides and then divide by the positive $(x_2-c)$:

$$\frac{\phi(x_2)-\phi(c)}{x_2-c} \ge \frac{x_2-x_1}{(c-x_1)(x_2-c)}\phi(c) - \frac{1}{c-x_1}\phi(x_1) - \frac{\phi(c)}{x_2-c}.$$

The right-hand side can now be simplified by first combining the coefficients of $\phi(c)$ and then cancelling $(x_2-c)$:

$$\begin{eqnarray} \frac{x_2-x_1}{(c-x_1)(x_2-c)}\phi(c) - \frac{1}{c-x_1}\phi(x_1) - \frac{\phi(c)}{x_2-c} & = & \frac{(x_2-x_1) - (c-x_1)}{(c-x_1)(x_2-c)}\phi(c) - \frac{1}{c-x_1}\phi(x_1)\\ & = & \frac{x_2-c}{(c-x_1)(x_2-c)}\phi(c) - \frac{1}{c-x_1}\phi(x_1)\\ & = & \frac{1}{c-x_1}\phi(c) - \frac{1}{c-x_1}\phi(x_1)\\ & = & \frac{\phi(c) - \phi(x_1)}{c-x_1},\\ \end{eqnarray} $$

which finally proves the lemma!
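
As a quick numerical sanity check of the lemma (just a sketch, not part of the proof), here is a small Python snippet. The test function `phi` is a hypothetical example of my own choosing: it stays above the line $y=2-x$ through $(c,\phi(c))=(1,1)$, which forces "$\phi$ convex at $c$", yet it is neither convex nor differentiable at $c$, and as far as I can check it is strictly decreasing near $c$ (relevant to the follow-up question). The script tests both the hypothesis and the lemma's slope inequality on random pairs $x_1<c<x_2$.

```python
import math
import random

c = 1.0

def phi(x):
    # Hypothetical test function: phi(x) >= 2 - x = phi(c) - (x - c) for all x > 0,
    # so phi is "convex at c", yet phi is neither convex nor differentiable at c.
    return max(2.0 - x, 0.0) + abs(x - 1.0) * (0.3 + 0.3 * math.sin(10.0 * x))

random.seed(0)
for _ in range(10_000):
    x1 = random.uniform(0.01, c - 1e-6)   # x1 in (0, c)
    x2 = random.uniform(c + 1e-6, 10.0)   # x2 in (c, +inf), truncated for sampling
    alpha = (x2 - c) / (x2 - x1)          # the alpha used in the lemma's proof
    # hypothesis: "phi convex at c"
    assert phi(c) <= alpha * phi(x1) + (1.0 - alpha) * phi(x2) + 1e-9
    # conclusion of the lemma: right-hand chord slope >= left-hand chord slope
    assert (phi(x2) - phi(c)) / (x2 - c) >= (phi(c) - phi(x1)) / (c - x1) - 1e-9
print("hypothesis and lemma inequality held on all sampled pairs")
```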


Now for the OP's problem. Let the $x_i, \lambda_i$ be given for $i=1,2,\ldots,k$ and let's assume they fulfill the stated properties.

If all $x_i$ are at least $c$, then $\sum_{i=1}^k \lambda_i(x_i-c)=0$ with non-negative terms, so $x_i=c$ whenever $\lambda_i>0$, and the conclusion is trivial. So assume at least one $x_i < c$ exists. W.l.o.g. assume that $x_1,x_2,\ldots,x_r < c$ while $x_{r+1},\ldots,x_k \ge c$ for some $r \in \{1,2,\ldots,k\}$.

Let

$$m:=\max_{i=1,2,\ldots,r} \frac{\phi(c)-\phi(x_i)}{c-x_i}.$$

Let $i_{max}$ be one index where the maximum is realized.

I claim that all points $(x_i,\phi(x_i))$ lie on or above the line

$$y=m(x-c)+\phi(c)$$

passing through $(c,\phi(c)).$

If $i \le r$, we know by the definition of $m$ that

$$\frac{\phi(c)-\phi(x_i)}{c-x_i} \le m,$$

so multiplying by the positive $(c-x_i)$ yields

$$\phi(c)-\phi(x_i) \le m(c-x_i) \Rightarrow \phi(c) + m(x_i-c) \le \phi(x_i),$$

as desired.

If $x_i=c$, then $(x_i,\phi(x_i))=(c,\phi(c))$ obviously lies on the line.

If $x_i > c$, then we know by the lemma (remember that $x_i > c, x_{i_{max}} < c$, so the lemma can be applied) that

$$\frac{\phi(x_i)-\phi(c)}{x_i-c} \ge \frac{\phi(c)-\phi(x_{i_{max}})}{c-x_{i_{max}}} = m,$$

so we get immediately

$$\phi(x_i)-\phi(c) \ge m(x_i-c) \Rightarrow \phi(x_i) \ge \phi(c) + m(x_i-c).$$

This proves that indeed all $(x_i,\phi(x_i))$ lie on or above the line $y=m(x-c)+\phi(c)$, and the proof now concludes easily:

$$\begin{eqnarray} \sum_{i=1}^k\lambda_i\phi(x_i) & \ge & \sum_{i=1}^k\lambda_i(\phi(c)+m(x_i-c))\\ & = & \sum_{i=1}^k\lambda_i\phi(c) + m \sum_{i=1}^k\lambda_ix_i - m \sum_{i=1}^k\lambda_ic\\ & = & \phi(c) \sum_{i=1}^k\lambda_i +mc - mc\times1\\ & = & \phi(c)\\ & = & \phi(\sum_{i=1}^k\lambda_ix_i),\\ \end{eqnarray}$$

which is exactly the statement to be proved.
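
As a final numerical illustration (again just a sketch, reusing the hypothetical test function from the check after the lemma), the snippet below draws random convex combinations whose weighted mean is exactly $c$, computes the slope $m$ exactly as in the proof, and verifies both the supporting-line claim and the final inequality.

```python
import math
import random

c = 1.0

def phi(x):
    # Same hypothetical test function as before: "convex at c" (it lies above the
    # line y = 2 - x through (c, phi(c))), but neither convex nor differentiable at c.
    return max(2.0 - x, 0.0) + abs(x - 1.0) * (0.3 + 0.3 * math.sin(10.0 * x))

def random_combination(k):
    # Draw x_i > 0 and weights lambda_i >= 0 with sum(lambda_i) = 1; the x_i are
    # then shifted so that sum(lambda_i * x_i) = c exactly (retry if a shifted
    # point leaves (0, infinity)).
    while True:
        xs = [random.uniform(0.05, 3.0) for _ in range(k)]
        ws = [random.random() for _ in range(k)]
        lam = [w / sum(ws) for w in ws]
        mu = sum(l * x for l, x in zip(lam, xs))
        xs = [x + (c - mu) for x in xs]
        if all(x > 0 for x in xs):
            return xs, lam

random.seed(1)
for _ in range(2_000):
    xs, lam = random_combination(k=6)
    below = [x for x in xs if x < c]
    if not below:
        continue  # then every x_i equals c and the inequality is trivial
    m = max((phi(c) - phi(x)) / (c - x) for x in below)   # slope of the supporting line
    # every sampled point lies on or above the line y = m*(x - c) + phi(c)
    assert all(phi(x) >= phi(c) + m * (x - c) - 1e-9 for x in xs)
    # the finite Jensen-type inequality of the question
    assert sum(l * phi(x) for l, x in zip(lam, xs)) >= phi(c) - 1e-9
print("supporting-line claim and final inequality held on all samples")
```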
