Solved – Why do the probability distributions multiply here

Tags: normal-distribution, probability

Let $X$ be, for example, your number of days remaining to live. Doctor 1 evaluates the distribution of $X$ as a Gaussian: $P(X)\sim\mathcal{N}(\mu_1,\sigma_1)$. Another, independent doctor 2 evaluates $P(X)\sim\mathcal{N}(\mu_2,\sigma_2)$. Both doctors are equally reliable. How should we combine the two pieces of information?

In this blog article, the author says that

If we have two probabilities and we want to know the chance that both are true, we just multiply them together. So, we take the two Gaussian blobs and multiply them:
[figure from the blog: the two Gaussian blobs and their product]

Edit: Most people (I first asked this question on math.SE) have answered that this is the trivial independence relation $P(A\cap B)=P(A)P(B)$, but I still have difficulty understanding what $A$ and $B$ would be in this context: probably not events such as "the die will give a 3" or "the patient is sick". Also, there is probably something more, because the product of two densities is not a probability density, since in general $\int_\mathbb{R} P(x)^2\,dx \neq 1$. So it's probably not as simple as that.
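(A quick numerical illustration of this last point, not part of the original question: the sketch below integrates the pointwise product of two Gaussian densities, with arbitrarily chosen means and standard deviations standing in for the two doctors, and the result is nowhere near 1.)

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Two arbitrary Gaussian densities (hypothetical numbers for the two doctors).
f1 = norm(loc=30.0, scale=5.0).pdf   # doctor 1: N(30, 5)
f2 = norm(loc=40.0, scale=8.0).pdf   # doctor 2: N(40, 8)

# Integrate their pointwise product over a range covering essentially all the mass.
integral, _ = quad(lambda x: f1(x) * f2(x), -20.0, 100.0)
print(integral)   # ~ 0.024, far from 1: the product is not a probability density
```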

Let's take another example. Expert 1 tells you that a die is perfectly balanced. Expert 2 tells you, independently, the same. Then the probability of the die giving a 3 is certainly not $1/6^2$.

Best Answer

These operations are being performed on likelihoods rather than probabilities. Although the distinction may be subtle, you identified a crucial aspect of it: the product of two densities is (almost) never a density. (See the comment thread for a discussion of why "almost" is required.)

The language in the blog hints at this--but at the same time gets it subtly wrong--so let's analyze it:

The mean of this distribution is the configuration for which both estimates are most likely, and is therefore the best guess of the true configuration given all the information we have.

  1. We have already observed the product is not a distribution. (Although it could be turned into one via multiplication by a suitable number, that's not what's going on here.)

  2. The words "estimates" and "best guess" indicate that this machinery is being used to estimate a parameter--in this case, the "true configuration" (x,y coordinates).

  3. Unfortunately, the mean is not the best guess. The mode is. This is the Maximum Likelihood (ML) Principle.

In order for the blog's explanation to make sense, we have to suppose the following. First, there is a true, definite location. Let's abstractly call it $\mu$. Second, each "sensor" is not reporting $\mu$. Instead it reports a value $X_i$ that is likely to be close to $\mu$. The sensor's "Gaussian" gives the probability density for the distribution of $X_i$. To be very clear, the density for sensor $i$ is a function $f_i$, depending on $\mu$, with the property that for any region $\mathcal{R}$ (in the plane), the chance that the sensor will report a value in $\mathcal{R}$ is

$$\Pr(X_i \in \mathcal{R}) = \int_{\mathcal{R}} f_i(x;\mu) dx.$$

Third, the two sensors are assumed to be operating with physical independence, which is taken to imply statistical independence.
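(A minimal simulation sketch of this setup, with an invented one-dimensional true location and invented noise levels: each sensor draws a reading near $\mu$, and the fraction of readings landing in a region $\mathcal{R}$ matches the integral of its density over that region.)

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

mu = 2.0                    # the true location (unknown to us in practice)
sigma1, sigma2 = 0.5, 1.2   # hypothetical noise levels of the two sensors

# Each sensor reports a value X_i that tends to be close to mu.
x1 = rng.normal(mu, sigma1)
x2 = rng.normal(mu, sigma2)
print(x1, x2)

# Defining property of the density: Pr(X_1 in R) = integral of f_1(x; mu) over R.
R = (1.5, 2.5)
samples = rng.normal(mu, sigma1, size=200_000)
empirical = np.mean((samples >= R[0]) & (samples <= R[1]))
analytic = norm.cdf(R[1], mu, sigma1) - norm.cdf(R[0], mu, sigma1)
print(empirical, analytic)  # both ~ 0.68
```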

By definition, the likelihood of the two observations $x_1, x_2$ is the probability density they would have under this joint distribution, given that the true location is $\mu$. The independence assumption implies that this is the product of the densities. To clarify a subtle point,

  1. The product function that assigns $f_1(x;\mu)f_2(x;\mu)$ to an observation $x$ is not a probability density for $x$; however,

  2. The product $f_1(x_1;\mu)f_2(x_2;\mu)$ is the joint density for the ordered pair $(x_1, x_2)$.

In the posted figure, $x_1$ is the center of one blob, $x_2$ is the center of the other, and the points of the plane represent possible values of $\mu$. Notice that neither $f_1$ nor $f_2$ is intended to say anything at all about probabilities of $\mu$! $\mu$ is just an unknown fixed value; it is not a random variable.

Here is another subtle twist: the likelihood is considered a function of $\mu$. We have the data--we're just trying to figure out what $\mu$ is likely to be. Thus, what we need to be plotting is the likelihood function

$$\Lambda(\mu) = f_1(x_1;\mu)f_2(x_2;\mu).$$
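(As a sketch of what this looks like in practice, with invented readings $x_1, x_2$ and known noise levels: evaluate $\Lambda(\mu)$ on a grid of candidate $\mu$ values, with the data held fixed, and take the maximizer.)

```python
import numpy as np
from scipy.stats import norm

x1, x2 = 1.3, 2.1           # the observed (fixed) sensor readings
sigma1, sigma2 = 0.5, 1.2   # known sensor noise levels

# The likelihood is a function of mu; the data x1 and x2 never change.
mus = np.linspace(-2.0, 5.0, 70_001)
lik = norm.pdf(x1, loc=mus, scale=sigma1) * norm.pdf(x2, loc=mus, scale=sigma2)

mu_hat = mus[np.argmax(lik)]
print(mu_hat)               # ~ 1.42, the precision-weighted mean of x1 and x2
```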

It is a singular coincidence that this, too, happens to be a Gaussian! The demonstration is revealing. Let's do the math in just one dimension (rather than two or more) to see the pattern--everything generalizes to more dimensions. The logarithm of a Gaussian has the form

$$\log f_i(x_i;\mu) = A_i - B_i(x_i-\mu)^2$$

for constants $A_i$ and $B_i$. Thus the log likelihood is

$$\begin{aligned} \log \Lambda(\mu) &= A_1 - B_1(x_1-\mu)^2 + A_2 - B_2(x_2-\mu)^2 \\ &= C - (B_1+B_2)\left(\mu - \frac{B_1x_1+B_2x_2}{B_1+B_2}\right)^2 \end{aligned}$$

where $C$ does not depend on $\mu$. This is the log of a Gaussian where the role of the $x_i$ has been replaced by that weighted mean shown in the fraction.
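Making the weights explicit (a step left implicit above): for the usual parametrization, $B_i = 1/(2\sigma_i^2)$, so the weighted mean is the precision-weighted combination

$$\hat\mu = \frac{x_1/\sigma_1^2 + x_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2},$$

and the width of the resulting Gaussian-shaped likelihood satisfies $1/\sigma^2 = 1/\sigma_1^2 + 1/\sigma_2^2$. In the two-doctors example, this says the more confident doctor (smaller $\sigma_i$) gets proportionally more weight.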

Let's return to the main thread. The ML estimate of $\mu$ is that value which maximizes the likelihood. Equivalently, it maximizes this Gaussian we just derived from the product of the Gaussians. By definition, the maximum is a mode. It is coincidence--resulting from the point symmetry of each Gaussian around its center--that the mode happens to coincide with the mean.


This analysis has revealed that several coincidences in the particular situation have obscured the underlying concepts:

  • a multivariate (joint) distribution was easily confused with a univariate distribution (which it is not);

  • the likelihood looked like a probability distribution (which it is not);

  • the product of Gaussians happens to be Gaussian (a regularity which is not generally true when sensors vary in non-Gaussian ways);

  • and their mode happens to coincide with their mean (which is guaranteed only for sensors with symmetric responses around the true values).

Only by focusing on these concepts and stripping away the coincidental behaviors can we see what's really going on.
