Generalizing a logarithmic inequality

inequality, information theory, logarithms, real-analysis

Let $\{x_i\}_{i=1}^N$ and $\{y_i\}_{i=1}^N$ be real numbers in the interval $(0,1)$. Define for each $i$
$$\alpha_i = x_i (1-x_i) \log^2 \frac{x_i (1-y_i)}{y_i (1-x_i)}$$
and
$$\beta_i = x_i \log \frac{x_i}{y_i} + (1-x_i) \log \frac{1-x_i}{1-y_i}.$$
You may recognize $\beta_i$ as the Kullback-Leibler divergence between the Bernoulli distributions with parameters $x_i$ and $y_i$, though it's not clear to me that this is useful.
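
In code these quantities are straightforward to evaluate; a minimal sketch (Python, natural logarithms; the helper names are mine, not standard):

```python
import math

def alpha(x, y):
    # alpha_i = x(1-x) * log^2( x(1-y) / (y(1-x)) )
    return x * (1 - x) * math.log(x * (1 - y) / (y * (1 - x))) ** 2

def beta(x, y):
    # beta_i = x log(x/y) + (1-x) log((1-x)/(1-y)), i.e. KL(Bernoulli(x) || Bernoulli(y))
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))
```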

I conjecture that
$$
\require{enclose}\enclose{horizontalstrike}{\frac{\sum_{i=1}^N \alpha_i}{\left(\sum_{i=1}^N \beta_i\right)^2} \le \left(\sum_{i=1}^N (x_i-y_i)^2\right)^{-1}.}
$$

I can prove it when $N=1$ (though not in a particularly elegant or enlightening way), and I have numerical evidence for its truth when $N=2$. Can anyone prove or disprove it for general $N$?
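
(For concreteness, one way to probe this numerically for $N=2$ is a random search along the following lines; the sampling range and sample count are arbitrary illustrative choices.)

```python
import math, random

def alpha(x, y):
    return x * (1 - x) * math.log(x * (1 - y) / (y * (1 - x))) ** 2

def beta(x, y):
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

def gap(xs, ys):
    # LHS minus RHS of the conjectured inequality; negative means it holds here.
    A = sum(map(alpha, xs, ys))
    B = sum(map(beta, xs, ys))
    D = sum((x - y) ** 2 for x, y in zip(xs, ys))
    return A / B ** 2 - 1 / D

random.seed(0)
worst = max(
    gap([random.uniform(0.01, 0.99) for _ in range(2)],
        [random.uniform(0.01, 0.99) for _ in range(2)])
    for _ in range(10_000)
)
print(worst)  # largest observed gap; negative if no violation was found
```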

Update: Leonbloy disproved the above by counterexample.

Assuming it is true, can you derive a similar upper bound for the quantity
$$
\frac{\sum_{i=1}^N \alpha_i \gamma_i}{\left(\sum_{i=1}^N \beta_i\right)^2}
$$

for positive real numbers $\{\gamma_i\}_{i=1}^N$?

Edit:
To be clear, my proof of the $N=1$ case is simply to define
$$
f_x(y) = (x-y) \sqrt{x(1-x)} \log \frac{x(1-y)}{y(1-x)} - x\log \frac{x}{y} - (1-x) \log \frac{1-x}{1-y},
$$

and differentiate twice to show that $f_x(y) \le f_x(x) = 0$. Since $x-y$ and the logarithm have the same sign, the first term equals $|x-y|\sqrt{\alpha_1}$, so $f_x(y) \le 0$ is exactly the $N=1$ case. I do not think this argument generalizes easily.
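
(A quick way to sanity-check the claim $f_x(y) \le 0$ is a grid evaluation; a rough sketch, with an arbitrary grid resolution:)

```python
import math

def f(x, y):
    # f_x(y) as defined above
    return ((x - y) * math.sqrt(x * (1 - x))
            * math.log(x * (1 - y) / (y * (1 - x)))
            - x * math.log(x / y)
            - (1 - x) * math.log((1 - x) / (1 - y)))

grid = [i / 200 for i in range(1, 200)]  # points in (0, 1)
print(max(f(x, y) for x in grid for y in grid))  # should not exceed 0 if the claim holds
```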

Edit 2: I now suspect
$$
\require{enclose}\enclose{horizontalstrike}{\frac{\sum_{i=1}^N \alpha_i \gamma_i}{\left(\sum_{i=1}^N \beta_i\right)^2} \le \sqrt{\frac{1}{N}\sum_{i=1}^N \gamma_i^2} \left(\sum_{i=1}^N (x_i-y_i)^2\right)^{-1}}
$$

Is it true? Can it be tightened?

This is false. Leonbloy provided a counterexample.

Edit 3: A new conjecture, which for my purposes would be the most useful to prove: let $p_1, \dots, p_N \in [0,1]$ with $\sum_i p_i = 1$, and take $\alpha_i, \beta_i, \gamma_i$ as above. Then
$$ \frac{\sum_i p_i \alpha_i \gamma_i}{\left(\sum_i p_i \beta_i\right)^2} \le \frac{\sum_i p_i \gamma_i}{ \sum_i p_i (x_i-y_i)^2}.$$
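
(Both sides of this inequality are easy to evaluate for candidate inputs; a minimal sketch, with function names of my own choosing:)

```python
import math

def alpha(x, y):
    return x * (1 - x) * math.log(x * (1 - y) / (y * (1 - x))) ** 2

def beta(x, y):
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

def edit3_sides(p, x, y, g):
    # Returns (LHS, RHS) of the Edit 3 inequality for weights p,
    # points x, y in (0,1), and positive reals g = gamma.
    lhs = (sum(pi * alpha(xi, yi) * gi for pi, xi, yi, gi in zip(p, x, y, g))
           / sum(pi * beta(xi, yi) for pi, xi, yi in zip(p, x, y)) ** 2)
    rhs = (sum(pi * gi for pi, gi in zip(p, g))
           / sum(pi * (xi - yi) ** 2 for pi, xi, yi in zip(p, x, y)))
    return lhs, rhs
```

Candidate $(p, x, y, \gamma)$ can then be plugged in and the two returned values compared.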

Best Answer

Edit: conjecture disproved. See below.


Not an answer, but: Let's see what happens around $x_i = y_i$. Let $y_i = x_i + d_i$. Doing a Taylor expansion around $d_i=0$ (and using natural logarithms), we get

$$\alpha_i = \frac{d_i^2}{x_i(1-x_i)} + O(d_i^3)$$

$$\beta_i = \frac{d_i^2}{2\,x_i(1-x_i)} + O(d_i^3)$$

Then, to leading order, letting $A = \sum \alpha_i$, $B = \sum \beta_i$, and $D = \sum d_i^2$, and noticing that $4 x_i(1-x_i) \le 1$, we get

$$B^2 \approx \frac14 A^2 = A \sum \frac{d_i^2}{4 x_i(1-x_i)} \ge A D $$

which agrees with your conjecture (but, of course, it does not prove it).

This reasoning does not seem to support your second conjecture.
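
(The two Taylor expansions above are easy to confirm symbolically, e.g. with SymPy; a quick sketch:)

```python
import sympy as sp

x, d = sp.symbols('x d', positive=True)
y = x + d

alpha = x * (1 - x) * sp.log(x * (1 - y) / (y * (1 - x))) ** 2
beta = x * sp.log(x / y) + (1 - x) * sp.log((1 - x) / (1 - y))

# Both should reduce to d**2/(x*(1-x)) and d**2/(2*x*(1-x)) plus O(d**3).
print(sp.series(alpha, d, 0, 3))
print(sp.series(beta, d, 0, 3))
```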

Added: this can be generalized to weighted sums $B' = \sum p_i \beta_i$, etc., lending some support to your third conjecture, but only with $\gamma_i=1$. With that restriction, it also seems to hold in numerical experiments.

Added: the third conjecture is false even with $\gamma_i=1$. Counterexample: $x=p=(0.1, 0.3, 0.6)$, $y=(0.001, 0.666, 0.333)$.

Added 2: the initial conjecture is false. A counterexample: $x=(0.1, 0.3)$, $y=(0.000001, 0.847)$.
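
Plugging these numbers into the definitions confirms the violation; a minimal check (natural logs, helper names mine):

```python
import math

def alpha(x, y):
    return x * (1 - x) * math.log(x * (1 - y) / (y * (1 - x))) ** 2

def beta(x, y):
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

x = (0.1, 0.3)
y = (0.000001, 0.847)

lhs = sum(map(alpha, x, y)) / sum(map(beta, x, y)) ** 2
rhs = 1 / sum((xi - yi) ** 2 for xi, yi in zip(x, y))
print(lhs, rhs)  # lhs comes out larger than rhs, so the inequality fails here
```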