Minimum value of the expectation $\mathbb{E}[ X_1 X_2 / (X_1^2 + X_2^2) ]$

expectation, inequality, probability theory

Let $X_1$ and $X_2$ be i.i.d. random variables from a distribution $D$ on the real numbers with finite variance (and therefore finite mean). Assume that the probability of $X_i = 0$ is $0$. Must it be true that
$$
\mathbb{E}\left[ \frac{X_1 X_2}{X_1^2 + X_2^2} \right] \ge 0?
$$
If not, what is the infimum over all such distributions of this expectation?

Comments

The expectation is always finite. It is possible for the expectation to be $0$, for instance when $D$ is symmetric about $0$. My conjecture is that the expectation is necessarily nonnegative. Of course, without the denominator, $\mathbb{E}[X_1 X_2] = \mathbb{E}[X_1] \cdot \mathbb{E}[X_2] = \mu^2 \ge 0$, where $\mu$ is the common mean. But with the denominator, it is not so clear.
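Both claims are easy to verify directly. Since $2|X_1 X_2| \le X_1^2 + X_2^2$, the ratio is bounded,
$$
\left| \frac{X_1 X_2}{X_1^2 + X_2^2} \right| \le \frac12,
$$
so the expectation exists and is finite for every admissible $D$. And if $D$ is symmetric about $0$, then $(-X_1, X_2)$ has the same joint law as $(X_1, X_2)$, so
$$
\mathbb{E}\left[ \frac{X_1 X_2}{X_1^2 + X_2^2} \right] = \mathbb{E}\left[ \frac{-X_1 X_2}{X_1^2 + X_2^2} \right] = -\,\mathbb{E}\left[ \frac{X_1 X_2}{X_1^2 + X_2^2} \right],
$$
which forces the expectation to be $0$.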

I imagine this may be very elementary; I am not well versed in the inequalities commonly used in probability theory.

I tried expanding the fraction with partial fractions over the complex numbers, getting
$$
\frac{X_1 X_2}{X_1^2 + X_2^2} = \frac{\tfrac12 X_2}{X_1 + i X_2} + \frac{\tfrac12 X_2}{X_1 - i X_2},
$$
but I don't have an idea for how to evaluate these expectations, either.
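For what it is worth, the decomposition itself can be checked by recombining the two terms over the common denominator:
$$
\frac{\tfrac12 X_2}{X_1 + i X_2} + \frac{\tfrac12 X_2}{X_1 - i X_2}
= \frac{\tfrac12 X_2\bigl[(X_1 - i X_2) + (X_1 + i X_2)\bigr]}{X_1^2 + X_2^2}
= \frac{X_1 X_2}{X_1^2 + X_2^2}.
$$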

This question grew out of a previous question of mine. Specifically, we can write
$$
\mathbb{E}\left[ \frac{(X_1 + X_2)^2}{X_1^2 + X_2^2}\right] = 1 + 2 \cdot \mathbb{E}\left[ \frac{X_1 X_2}{X_1^2 + X_2^2} \right],
$$
and in the answer to that question it appeared that the former expectation never goes below $1$. This is equivalent to the nonnegativity of the latter expectation asked about here.
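As a quick numerical sanity check (not a proof), here is a minimal Monte Carlo sketch; the shifted exponential, the shift of $0.3$, and the sample size are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate(sample, n=10**6):
    """Monte Carlo estimate of E[X1 X2 / (X1^2 + X2^2)] for i.i.d. X1, X2."""
    x1, x2 = sample(n), sample(n)
    return np.mean(x1 * x2 / (x1**2 + x2**2))

# A deliberately asymmetric distribution: an exponential shifted to put some mass below 0.
shifted_exp = lambda n: rng.exponential(1.0, size=n) - 0.3

print(estimate(shifted_exp))          # comes out positive in these experiments
print(estimate(rng.standard_normal))  # symmetric case: close to 0
```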

Best Answer

One can check that the "kernel" $k(u,v)=uv/(u^2+v^2)$ is positive semidefinite, for instance by noting that $$\tag{1}k(u,v)=\int_0^\infty \bigl(ue^{-u^2x}\bigr)\bigl(ve^{-v^2x}\bigr)\,dx.$$ See the Wikipedia article on positive-definite kernels for basic facts about these functions.
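Concretely, for any nonzero reals $u_1,\dots,u_m$ and real coefficients $c_1,\dots,c_m$, the representation (1) gives
$$
\sum_{i,j} c_i c_j\, k(u_i,u_j) = \int_0^\infty \Bigl(\sum_i c_i\, u_i e^{-u_i^2 x}\Bigr)^2 dx \ge 0,
$$
which is exactly the positive semidefiniteness used below; the identity (1) itself follows from $\int_0^\infty e^{-(u^2+v^2)x}\,dx = \tfrac{1}{u^2+v^2}$ whenever $(u,v)\neq(0,0)$.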

The desired inequality is a direct consequence of this: your expectation $\mathbb E\, k(X_1,X_2)$ is one of the quadratic expressions guaranteed to be non-negative by the PSD property of $k$, or is approximated by such expressions.

In greater detail: since the finitely supported probability measures are dense, in the weak topology, in the space of all probability measures on $\mathbb R$, there exists a sequence of finitely supported probability measures $P_n$ converging weakly to the distribution of $X_1$. Since $k$ is bounded and continuous except at the origin, which carries no mass under the limiting product measure, we have $$\mathbb E\, k(X_1,X_2) = \lim_n \iint k(u,v)\, P_n(du)\, P_n(dv).$$ Suppose $P_n$ assigns mass $p_i$ to the point $u_i$, for finitely many values of $i$ (suppressing the dependence on $n$ in the notation), so that $P_n = \sum_i p_i \delta_{u_i}$. Then $\iint k(u,v)\, P_n(du)\, P_n(dv)=\sum_{i,j} p_i p_j\, k(u_i,u_j)$, and this latter quantity is known to be non-negative by the positive semidefiniteness of $k$. So $\mathbb E\, k(X_1,X_2)$ is a limit of non-negative quantities, hence is itself non-negative.
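For a quick numerical illustration of this finite-support step (the support points, probability weights, and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random nonzero support points u_i and the Gram matrix K[i, j] = k(u_i, u_j).
u = rng.standard_normal(50)
K = np.outer(u, u) / (u[:, None]**2 + u[None, :]**2)

# Positive semidefiniteness: the smallest eigenvalue should be >= 0 (up to rounding error).
print(np.linalg.eigvalsh(K).min())

# The quadratic form with probability weights p_i, as in the finite-support step above.
p = rng.random(50)
p /= p.sum()
print(p @ K @ p)  # non-negative
```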

Another way of using the integral representation (1) above is to notice that $$\mathbb E\, k(X_1,X_2)= \int_0^\infty \mathbb E\bigl(X_1 e^{-tX_1^2}\bigr)\, \mathbb E\bigl(X_2 e^{-tX_2^2}\bigr)\,dt = \int_0^\infty \left(\mathbb E\, X_1 e^{-t X_1^2}\right)^2\,dt\ge 0.$$ One needs something like the Fubini–Tonelli theorem to justify the first equality here.
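The interchange of expectation and integral is legitimate because the integrand is absolutely integrable: by Tonelli,
$$
\int_0^\infty \mathbb E\bigl[\,|X_1 X_2|\, e^{-t(X_1^2+X_2^2)}\bigr]\,dt
= \mathbb E\left[\frac{|X_1 X_2|}{X_1^2+X_2^2}\right] \le \frac12 < \infty,
$$
so Fubini's theorem applies to the signed integrand.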