Probability Theory – Showing Expectation for Gaussian Random Variables

probability-theory, stochastic-analysis, stochastic-calculus, stochastic-processes

I am working on the following exercise:

Suppose $(X_{0},\cdots, X_{n})$ is a Gaussian vector (not necessarily centered). Show that there are constants $c_{0}, c_{1},\cdots, c_{n}$ such that $$\mathbb{E}(X_{0}|X_{1},\cdots,X_{n})=c_{0}+c_{1}X_{1}+\cdots+c_{n}X_{n}.$$

A solution for the centered Gaussian case is here: Conditional Expectation of Gaussian Random Vector of length n

How should I modify the proof when the Gaussian vector is not zero-mean?

Also, that proof uses the density function, but we know that the density of a Gaussian vector exists if and only if the covariance matrix is non-degenerate (invertible), right?

What will happen if the covariance matrix is degenerate?

Can we still prove the exercise? (The exercise also says that the proof should be valid regardless of whether the covariance matrix is degenerate.)

Thank you so much in advance!

Best Answer

Look at the proof in my answer to the question you linked; it already handles a non-centered Gaussian vector. Write $X_a := X_0$ and $X_b := (X_1,\dots,X_n)$, with means $\mu_a$ and $\mu_b$. As for the case where the covariance matrix might be degenerate, we can always find a matrix $C$ such that $Z:=X_a- C X_b$ is uncorrelated with $X_b$, i.e., such that the equality $$\Sigma_{a,b}=C\,\Sigma_{b,b}\tag1$$ holds; in the general case take $$C:=\Sigma_{a,b}\Sigma_{b,b}{}^+,$$ where $\Sigma_{b,b}{}^+$ is the Moore-Penrose inverse of $\Sigma_{b,b}$. (If $\Sigma_{b,b}$ is invertible, then $\Sigma_{b,b}{}^+$ coincides with $\Sigma_{b,b}^{-1}$.) Since $Z$ and $X_b$ are jointly Gaussian and uncorrelated, they are independent, so $E(Z\mid X_b)=E(Z)=\mu_a-C\mu_b$. Continuing the proof as before, $$E(X_a\mid X_b) = \mu_a + C(X_b - \mu_b),$$ which is the claimed result with $c_0=\mu_a-C\mu_b$ and $(c_1,\dots,c_n)=C$.
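Here is a minimal numerical sketch of this construction (assuming NumPy; the matrix `M`, the mean `mu`, and the deficient rank are illustrative choices, not part of the exercise), using `np.linalg.pinv` for the Moore-Penrose inverse and checking that $(1)$ holds even when $\Sigma_{b,b}$ is singular:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: X = M @ W + mu with rank(M) = 2 < n + 1 = 4,
# so the covariance matrix of X is degenerate.
n = 3
M = rng.standard_normal((n + 1, 2))
mu = rng.standard_normal(n + 1)

Sigma = M @ M.T              # covariance of (X_0, ..., X_n)
Sigma_ab = Sigma[:1, 1:]     # Cov(X_a, X_b) with X_a = X_0, X_b = (X_1, ..., X_n)
Sigma_bb = Sigma[1:, 1:]     # Cov(X_b, X_b), singular here

C = Sigma_ab @ np.linalg.pinv(Sigma_bb)   # Moore-Penrose inverse of Sigma_bb

print(np.allclose(C @ Sigma_bb, Sigma_ab))  # True: equality (1) holds

c0 = mu[0] - C @ mu[1:]   # constant term c_0; the c_i are the entries of C
```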


EDIT: Why does it suffice to take the Moore-Penrose inverse of $\Sigma_{b,b}$? Recall that every multivariate Gaussian vector is an affine transformation of some vector $W$ of independent standard Gaussians (using $W$ here to avoid a clash with the $Z$ defined above). We can then write the subvectors $X_a$ and $X_b$ in the form $X_a = AW + \mu_a$, $X_b = BW + \mu_b$, with $A$ and $B$ matrices of constants. Since the covariance matrix of $W$ is the identity, we calculate $\Sigma_{a,b} = AB^T$ and $\Sigma_{b,b}=BB^T$.
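As a quick sanity check of these covariance formulas (again a sketch assuming NumPy; `A`, `B`, and the sample size are arbitrary choices), one can compare the empirical covariances of $X_a = AW$ and $X_b = BW$ against $AB^T$ and $BB^T$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary A (1 x k) and B (m x k); W has k independent standard
# Gaussian components, so Cov(W) is the identity.
A = rng.standard_normal((1, 4))
B = rng.standard_normal((3, 4))
W = rng.standard_normal((4, 500_000))   # 500k samples of W

X = np.vstack([A @ W, B @ W])           # rows: X_a, then X_b (means taken as 0)
S = np.cov(X)                           # empirical covariance matrix

print(np.allclose(S[:1, 1:], A @ B.T, atol=0.05))   # Sigma_ab ~ A B^T
print(np.allclose(S[1:, 1:], B @ B.T, atol=0.05))   # Sigma_bb ~ B B^T
```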

Using properties of Moore-Penrose inverses, we find $\Sigma_{b,b}{}^+=(BB^T)^+=(B^T)^+B^+$ and verify (1): $$ C\,\Sigma_{b,b}=\Sigma_{a,b}\Sigma_{b,b}{}^+\Sigma_{b,b} =AB^T(B^T)^+\underbrace{B^+BB^T}_{B^T} =A\underbrace{B^T(B^T)^+B^T}_{B^T}=AB^T=\Sigma_{a,b}. $$ (The first underbrace uses the fact that $B^+B$ is the orthogonal projection onto the row space of $B$, which contains the columns of $B^T$; the second is the defining Penrose identity $MM^+M=M$ applied to $M=B^T$.)
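A numerical spot-check of these pseudoinverse identities (a sketch assuming NumPy; the rank-deficient `B` is an arbitrary example, chosen so that $\Sigma_{b,b}=BB^T$ is singular):

```python
import numpy as np

rng = np.random.default_rng(2)

# Rank-deficient B (3 x 4 of rank 2), so Sigma_bb = B @ B.T is singular.
B = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 4))
pinv = np.linalg.pinv

# (B B^T)^+ = (B^T)^+ B^+  (a matrix times its transpose factors this way)
print(np.allclose(pinv(B @ B.T), pinv(B.T) @ pinv(B)))   # True

# B^+ B B^T = B^T : B^+ B projects onto the row space of B,
# which contains the columns of B^T.
print(np.allclose(pinv(B) @ B @ B.T, B.T))               # True

# B^T (B^T)^+ B^T = B^T : the Penrose identity M M^+ M = M with M = B^T.
print(np.allclose(B.T @ pinv(B.T) @ B.T, B.T))           # True
```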