Probability – Probability Density Function for White Gaussian Noise

probability, white noise

In many signal processing textbooks and lectures we find that, if we assume the noise is white Gaussian, then the probability density function itself takes the Gaussian form (see here for example) when estimating parameters via the maximum-likelihood estimation method.

I do not understand this leap: why, just because the noise is Gaussian, are the parameters themselves Gaussian distributed? I do not see how the white Gaussian noise fits into the probability density function at all! It seems we are always just guessing that the probability density function is normally distributed. Am I wrong? Can anyone help me understand this, or point me in a direction that does? Thank you very much.

Best Answer

As specified in the comments:

what I do not understand is how a linear model with Gaussian noise produces Gaussian data

This is because the family of normal distributions is closed under linear transformations: simply put, once you have a normally distributed random variable, you cannot make it non-normal by adding or multiplying by scalars. Let $X \sim \mathcal{N}(0, 1)$. Then for any constants $a, b$: $$ Y = a X + b \sim \mathcal{N}(b, a^2)$$ In the stochastic process setting, this $Y$ is the data, $X$ is the noise, and $b$ is determined by the fixed effects (what's sometimes called the DC offset in DSP, or the intercept if this were a basic regression model). Apply the equation above and you get the required distribution of $Y$.
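To make this concrete, here is a small simulation sketch (assuming NumPy and SciPy are available; the constants `a = 2.0` and `b = 5.0` are arbitrary choices for illustration). It draws white Gaussian noise $X$, forms $Y = aX + b$, and checks that $Y$ behaves like $\mathcal{N}(b, a^2)$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Arbitrary constants chosen for this illustration.
a, b = 2.0, 5.0

# X: white Gaussian noise, i.e. i.i.d. N(0, 1) samples.
x = rng.standard_normal(100_000)

# Linear model: Y = a*X + b, which should be N(b, a^2).
y = a * x + b

print("empirical mean:", y.mean())   # close to b = 5.0
print("empirical std: ", y.std())    # close to |a| = 2.0

# Kolmogorov-Smirnov test of Y against the predicted N(b, a^2);
# a large p-value is consistent with Y being normally distributed.
print(stats.kstest(y, "norm", args=(b, a)))
```

The empirical mean and standard deviation match $b$ and $|a|$, and the KS test does not reject normality, which is exactly the closure property the answer relies on.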
