Solved – How to find the conditional distribution of a Gaussian from the covariance matrix

conditional-probability, graphical-model, normal-distribution

I know that the conditional distribution of one Gaussian given another is Gaussian. But in the following statement, how does $\Theta$ capture the conditional distributions?

And what do they mean by the term "captures" here?

Let's suppose $X = (X_1, X_2, \dots, X_p)$ is a random variable with a multivariate Gaussian distribution with mean vector $0$ (for convenience) and covariance $\Sigma$; then $\Theta = \Sigma^{-1}$ captures the conditional distributions of each $X_j$ given the rest.

Best Answer

What is meant is simply that for Gaussian variables, dependence = linear dependence. In other words, all the information about the dependence between two jointly Gaussian variables is contained in their covariance or correlation.

In turn, if you know the covariance matrix, then you have the variances and the correlations between each element $X_i$ and the others: $\rho(X_i, X_j), j \neq i$.
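As a minimal sketch of that step, here is how to read the variances and correlations off a covariance matrix in NumPy (the matrix values below are hypothetical, just for illustration):

```python
import numpy as np

# Hypothetical 3x3 covariance matrix, for illustration only.
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])

variances = np.diag(Sigma)          # Var(X_i) sits on the diagonal
d = 1.0 / np.sqrt(variances)
corr = Sigma * np.outer(d, d)       # rho(X_i, X_j) = Sigma_ij / (sigma_i * sigma_j)

print(variances)
print(corr)
```

The diagonal of `corr` is all ones, and the off-diagonal entries are the pairwise correlations.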

Last, if you have centered Gaussians and know their variances and correlations, you can write them in terms of independent Gaussians, which gives you the conditional distribution of one given the others.

Let's take two standard Gaussians $(X_1, X_2)$ with correlation $\rho$, for example. If you want the distribution of $X_2 \mid X_1$, you have to:

  1. write $X_2 = \rho X_1 + \sqrt{1 - \rho^2}\, X_0$, with $X_0$ a standard Gaussian independent of $X_1$;

  2. conclude that $X_2 \mid X_1$ is Gaussian with mean $\rho X_1$ and variance $1 - \rho^2$.
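The two steps above are easy to check by simulation: build $X_2$ from the construction in step 1 and verify that the residual $X_2 - \rho X_1$ has variance $1 - \rho^2$ (the values of $\rho$ and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.7
n = 200_000

x1 = rng.standard_normal(n)
x0 = rng.standard_normal(n)                 # independent of x1
x2 = rho * x1 + np.sqrt(1 - rho**2) * x0    # step 1 construction

# Step 2: X2 | X1 has mean rho*X1 and variance 1 - rho^2,
# so the residual x2 - rho*x1 should have variance 1 - rho^2 = 0.51.
resid = x2 - rho * x1
print(resid.var())                # approximately 0.51
print(np.corrcoef(x1, x2)[0, 1])  # approximately 0.7
```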

In higher dimensions this generalizes via inverting the Cholesky decomposition (a.k.a. a square-root matrix of the covariance matrix), which expresses the $X_i$ in terms of independent standard Gaussians.
