Conditional differential entropy of $x+y$ given $x+z$

Tags: conditional-probability, information-theory

According to several textbooks, the conditional differential entropy is defined as $h(m|n)=-\iint f(m,n)\log\left(f(m|n)\right)\,dm\,dn$, where $f(m,n)$ and $f(m|n)$ are the joint and conditional densities.

Let $m = x+y$ and $n = x+z$, where $y\sim \mathcal{N}\left(0,N_0\right)$ and $z\sim \mathcal{N}\left(0,N_1\right)$ are independent of each other and of $x$. Suppose the distribution of $x$ is not known, but it satisfies the power constraint $\mathbb{E}\left[|x|^2\right]=P$. How can one obtain a tight bound on $h(m|n)$?

In most textbooks, I found the following example. If $M = X+Y$ and $N=X$, then
$$\begin{aligned}
h(M|N) & = h(X+Y|X)\\
& = h(Y)\\
& = \frac{1}{2}\log(2\pi e N_0)
\end{aligned}$$

However, I find the former case non-trivial, mainly because the distribution of $x$ is not known. Thus, I am not sure what the joint distribution $f(m,n)$ and the conditional distribution $f(m|n)$ are. What I do know is that $m$ and $n$ belong to the same class of distributions, since each is the convolution of a Gaussian with the unknown distribution of $x$, and they are correlated through $x$.

Update:

I found a close reference. Suppose $M=Hx+Y_1$ and $N=Gx+Y_2$, where $H,G$ are arbitrary complex constants and $Y_1,Y_2\sim \mathcal{CN}(0,\gamma)$ are independent, then $h(M|N)\leq\log\left(\pi e\left(\gamma+\frac{|H|^{2}\gamma P}{\gamma+|G|^{2}P}\right)\right)$. This follows from the fact that a circularly symmetric complex Gaussian distribution maximizes the conditional differential entropy under a given covariance constraint. But my question is: how can one obtain the expression $\left(\gamma+\frac{|H|^{2}\gamma P}{\gamma+|G|^{2}P}\right)$?
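One way to see where that expression comes from (a sketch, assuming the noise terms in $M$ and $N$ are independent with variance $\gamma$ and that $x$ has variance $P$) is to compute the linear-MMSE conditional variance of $M$ given $N$, which is exact when all variables are jointly Gaussian:

$$\begin{aligned}
\operatorname{Var}(M\mid N) &= \operatorname{Var}(M)-\frac{|\operatorname{Cov}(M,N)|^{2}}{\operatorname{Var}(N)}\\
&= |H|^{2}P+\gamma-\frac{|H|^{2}|G|^{2}P^{2}}{|G|^{2}P+\gamma}\\
&= \gamma+\frac{|H|^{2}P\left(|G|^{2}P+\gamma\right)-|H|^{2}|G|^{2}P^{2}}{|G|^{2}P+\gamma}\\
&= \gamma+\frac{|H|^{2}\gamma P}{\gamma+|G|^{2}P}.
\end{aligned}$$

Bounding $h(M|N)$ by the entropy of a circularly symmetric complex Gaussian with this variance then gives the stated inequality.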

Best Answer

I have managed to derive the answer to my question. The derivation is as follows: $$\begin{aligned} h\left(m|n\right) &= h(x+y\mid x+z)\\ &= h\big(x+y-\alpha(x+z)\,\big|\,x+z\big)\\ &\leq h\big((1-\alpha)x+y-\alpha z\big)\\ &\leq \frac{1}{2}\log \left( 2 \pi e \left(N_0+(1-\alpha)^2 P+\alpha^2 N_1\right) \right)\\ &= \frac{1}{2}\log \left( 2 \pi e \left(N_0+\frac{PN_1}{P+N_1}\right) \right), \end{aligned}$$ where the first inequality holds because conditioning cannot increase differential entropy, the second because a Gaussian maximizes differential entropy under a variance constraint, and the last line follows by substituting $\alpha=\frac{P}{P+N_1}$, the choice of $\alpha$ that minimizes the variance.
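As a quick numerical sanity check (a sketch assuming $x$ is itself Gaussian with variance $P$, in which case the bound is met with equality), one can compare the exact Gaussian conditional variance $\operatorname{Var}(m)-\operatorname{Cov}(m,n)^2/\operatorname{Var}(n)$ against $N_0+\frac{PN_1}{P+N_1}$:

```python
import numpy as np

# Illustrative values (assumptions, not from the original post).
P, N0, N1 = 2.0, 0.5, 1.0

# Second-order statistics of m = x + y and n = x + z, with x ~ N(0, P)
# and y, z independent of x and of each other.
var_m = P + N0    # Var(x + y)
var_n = P + N1    # Var(x + z)
cov_mn = P        # Cov(x + y, x + z) = Var(x)

# Exact conditional variance for jointly Gaussian (m, n).
cond_var = var_m - cov_mn**2 / var_n

# Variance appearing in the derived upper bound.
bound_var = N0 + P * N1 / (P + N1)

# The corresponding differential entropies, 0.5 * log(2*pi*e*variance).
h_exact = 0.5 * np.log(2 * np.pi * np.e * cond_var)
h_bound = 0.5 * np.log(2 * np.pi * np.e * bound_var)

print(cond_var, bound_var)              # the two variances coincide
print(np.isclose(h_exact, h_bound))     # True
```

That the two variances agree for every choice of $P$, $N_0$, $N_1$ confirms that the bound is tight when $x$ is Gaussian.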
