If $f$ and $g$ are bivariate normal PDFs with correlation coefficients $\rho_f$ and $\rho_g$ respectively, what is the correlation coefficient of the bivariate normal distribution $h = f * g$, where $*$ denotes the convolution operator? I've tried searching for the answer but have come up dry.
Correlation coefficient of convolution of two bivariate normal distributions
convolution, probability distributions
Related Solutions
To sum up what transpires from the question and from some comments, it seems that the question is as follows:
Let $(X_1,X_2)$ denote a centered gaussian vector with covariance matrix $C=\begin{pmatrix}1 & p\\ p & 1\end{pmatrix}$. Then,$$ \mathbb P(X_1\leqslant a,X_2\leqslant b)=\mathbb P(X_1\leqslant b,X_2\leqslant a). $$
To show this, note that the matrix $C$ stays the same when one exchanges the columns and one exchanges the rows, hence $(X_2,X_1)$ is distributed like $(X_1,X_2)$ (this uses the fact that the distribution of a gaussian vector is entirely determined by the vector of its means and by its covariance matrix). In particular, $$ \mathbb P((X_2,X_1)\in(-\infty,a]\times(-\infty,b])=\mathbb P((X_1,X_2)\in(-\infty,a]\times(-\infty,b]), $$ which is the desired result.
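This exchange symmetry is easy to check numerically. A minimal sketch using SciPy's bivariate normal CDF (the values of $p$, $a$, and $b$ below are arbitrary illustrations, not from the question):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Centered Gaussian vector with unit variances and correlation p
p = 0.6
C = np.array([[1.0, p], [p, 1.0]])
mvn = multivariate_normal(mean=[0.0, 0.0], cov=C)

a, b = 0.3, -1.1
lhs = mvn.cdf([a, b])  # P(X1 <= a, X2 <= b)
rhs = mvn.cdf([b, a])  # P(X1 <= b, X2 <= a)
print(lhs, rhs)        # the two probabilities agree
```

The agreement holds for any $a$, $b$, because exchanging the arguments corresponds to exchanging $(X_1, X_2)$, which leaves the distribution invariant.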
I think I have found the discrepancy. The short answer is that the distribution for case (2) is a doubly noncentral beta distribution. The bulk correlation is the same in each case.
In the solution below, I've used slightly different notation. In particular, $\boldsymbol{s} \rightarrow \boldsymbol{w}$. I've also used $\hat{A}$ to denote the maximum likelihood estimate of the amplitude of the sample vector $\boldsymbol{w}$ that maximally correlates with $\boldsymbol{x}$. This notation and formulation is targeted at detection theory applications.
The relative square error in approximating a data stream $\boldsymbol{x}$ by a waveform template $\boldsymbol{w}$ is the ratio of the least-squares error $\vert\vert \boldsymbol{e} \vert\vert^{2}$ to the measured signal energy $\vert \vert \boldsymbol{x} \vert \vert^{2}$. This error may be rewritten as a ratio of quadratic forms that has a doubly noncentral $\beta$ distribution:
\begin{equation} \begin{split} \cfrac{\vert \vert \boldsymbol{e} \vert \vert^{2}}{\vert \vert \boldsymbol{x} \vert \vert^{2}} &= \cfrac{\vert \vert \boldsymbol{x} - \hat{A} \boldsymbol{w} \vert \vert^{2} }{ \vert \vert \boldsymbol{x} \vert \vert^{2}} \\ &= \cfrac{\vert \vert \boldsymbol{x} - \cfrac{\langle \boldsymbol{x},\, \boldsymbol{w} \rangle}{ \vert \vert \boldsymbol{w} \vert \vert^{2}} \boldsymbol{w} \vert \vert^{2}} {\vert \vert \boldsymbol{x} \vert \vert^{2}} \\ &= \cfrac{ \vert \vert P_{\boldsymbol{w}}^{\perp} \left( \boldsymbol{x} \right) \vert \vert ^{2}} { \vert \vert P_{\boldsymbol{w}}^{\perp}\left( \boldsymbol{x} \right) \vert \vert ^{2} + \vert \vert P_{\boldsymbol{w}} \left( \boldsymbol{x} \right) \vert \vert ^{2}} \\ &\overset{d}{=} \cfrac{ \chi_{N_{E} - 1}^{2}( \lambda^{\perp} )} { \chi_{1}^{2}( \lambda ) + \chi_{N_{E} - 1}^{2}( \lambda^{\perp} ) }, \end{split} \end{equation}
where the noncentrality parameters are defined by $\lambda$ $=$ $\cfrac{\vert \vert P_{\boldsymbol{w}} \left( \boldsymbol{x} \right) \vert \vert ^{2}}{\sigma^{2}}$ and $\lambda^{\perp}$ $=$ $\cfrac{\vert \vert P_{\boldsymbol{w}}^{\perp} \left( \boldsymbol{x} \right) \vert \vert ^{2}}{\sigma^{2}}$, and where $\overset{d}{=}$ indicates distributional equality. This ratio is also related to the sample correlation coefficient $r$:
\begin{equation} \begin{split} \cfrac{\vert \vert \boldsymbol{x} - \cfrac{\langle \boldsymbol{x},\, \boldsymbol{w} \rangle}{ \vert \vert \boldsymbol{w } \vert \vert^{2}} \boldsymbol{w } \vert \vert^{2}} {\vert \vert \boldsymbol{x} \vert \vert^{2}} &= 1- \cfrac{\langle \boldsymbol{x},\, \boldsymbol{w} \rangle^{2} }{ \vert \vert \boldsymbol{w } \vert \vert^{2} \vert \vert \boldsymbol{x } \vert \vert^{2}} \\ &= 1 - r^{2} \end{split} \end{equation}
Therefore:
\begin{equation} \begin{split} r^{2} &\overset{d}{=} \cfrac{ \chi_{1}^{2}( \lambda )} { \chi_{1}^{2}( \lambda ) + \chi_{N_{E} - 1}^{2}( \lambda^{\perp} ) } \\ &\sim \text{Beta} \left( \frac{1}{2}, \frac{N_{E} - 1}{2} ; \lambda, \lambda^{\perp} \right) \end{split} \end{equation}
Distinct hypotheses about the distribution of $\boldsymbol{x}$ simplify the form of this distribution. When the data stream contains only noise, the hypothesis $\mathcal{H}_{0}$ holds, $\lambda^{\perp} = \lambda = 0$, and $r^{2}$ has a central Beta distribution. In the presence of signal, a data stream $\boldsymbol{x}$ will generally have a non-zero projection $P_{\boldsymbol{w}}^{\perp}\left( \boldsymbol{x} \right)$ orthogonal to the noise-contaminated template vector $\boldsymbol{w}$. In this case $\lambda, \lambda^{\perp} \ne 0$, and $r^{2}$ has a doubly noncentral Beta distribution. If the template signal has a very large SNR, then $\boldsymbol{x} \cong A \boldsymbol{w} + \boldsymbol{n}$, $\lambda^{\perp} \approx 0$, and $r^{2}$ is reasonably approximated by a (singly) noncentral Beta distribution. The noncentral Beta distribution therefore provides an absolute upper bound on the detection performance of a correlation detector.
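The $\mathcal{H}_{0}$ case is easy to verify by Monte Carlo: with pure white noise, $r^{2} \overset{d}{=} \chi_{1}^{2}/(\chi_{1}^{2} + \chi_{N_{E}-1}^{2})$ is central $\text{Beta}\!\left(\tfrac{1}{2}, \tfrac{N_{E}-1}{2}\right)$, whose mean is $1/N_{E}$. A minimal sketch (the template $\boldsymbol{w}$ and the choice $N_{E} = 8$ are arbitrary illustrations):

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
N_E = 8                                # sample length (illustrative choice)
w = rng.standard_normal(N_E)           # fixed template vector
trials = 200_000

# Under H0 the data stream is pure white Gaussian noise.
x = rng.standard_normal((trials, N_E))

# r^2 = <x, w>^2 / (||w||^2 ||x||^2), computed for all trials at once
r2 = (x @ w) ** 2 / (np.sum(x**2, axis=1) * (w @ w))

# Compare the empirical mean to the Beta(1/2, (N_E-1)/2) mean, 1/N_E.
print(r2.mean(), beta(0.5, (N_E - 1) / 2).mean())
```

The empirical mean should match $1/N_{E} = 0.125$ to within Monte Carlo error; a Q-Q plot against the Beta quantiles gives a stronger check of the full distribution.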
Best Answer
I don't think you can determine the correlation coefficient without also knowing the variances, but you should be able to determine the new covariance matrix given the covariance matrices for $f$ and $g$. It can be shown that the distribution with pdf $h=f*g$ is the distribution of $Z=X+Y$, where $X,Y$ are independent and have the respective densities $f$ and $g$. In particular, if $X,Y$ are independent with $X\sim N_2(\mu_X,\Sigma_X)$ and $Y\sim N_2(\mu_Y,\Sigma_Y)$, then $$X+Y \sim N_2(\mu_X+\mu_Y,\Sigma_X + \Sigma_Y),$$ and the correlation coefficient can then be read off from the matrix $\Sigma_X + \Sigma_Y$ as $$\rho_Z = \frac{\rho_X \sigma_{X_1} \sigma_{X_2}+\rho_Y \sigma_{Y_1}\sigma_{Y_2}} {\sqrt{\sigma_{X_1}^2+\sigma_{Y_1}^2} \sqrt{\sigma_{X_2}^2 + \sigma_{Y_2}^2 }},$$ where $$\Sigma_X = \begin{pmatrix} \sigma_{X_1}^2 & \rho_X \sigma_{X_1} \sigma_{X_2} \\ \rho_X \sigma_{X_1} \sigma_{X_2} & \sigma_{X_2}^2\end{pmatrix} \quad \text{ and } \quad \Sigma_Y = \begin{pmatrix} \sigma_{Y_1}^2 & \rho_Y \sigma_{Y_1} \sigma_{Y_2} \\ \rho_Y \sigma_{Y_1} \sigma_{Y_2} & \sigma_{Y_2}^2\end{pmatrix} $$
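The formula for $\rho_Z$ is straightforward to sanity-check by simulation: draw independent samples from the two bivariate normals, add them, and compare the empirical correlation of the sum against the closed form. A sketch with arbitrary illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters for f and g (not from the question)
sx1, sx2, rho_x = 1.0, 2.0, 0.5
sy1, sy2, rho_y = 1.5, 0.5, -0.3
Sx = np.array([[sx1**2, rho_x * sx1 * sx2],
               [rho_x * sx1 * sx2, sx2**2]])
Sy = np.array([[sy1**2, rho_y * sy1 * sy2],
               [rho_y * sy1 * sy2, sy2**2]])

n = 500_000
X = rng.multivariate_normal([0, 0], Sx, size=n)
Y = rng.multivariate_normal([0, 0], Sy, size=n)
Z = X + Y  # the density of Z is the convolution f * g

rho_formula = (rho_x * sx1 * sx2 + rho_y * sy1 * sy2) / (
    np.sqrt(sx1**2 + sy1**2) * np.sqrt(sx2**2 + sy2**2))
rho_empirical = np.corrcoef(Z[:, 0], Z[:, 1])[0, 1]
print(rho_formula, rho_empirical)  # should agree to roughly 1e-2
```

The closed form and the sample correlation agree up to Monte Carlo error, consistent with $Z \sim N_2(\mu_X+\mu_Y,\, \Sigma_X+\Sigma_Y)$.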