These operations are being performed on likelihoods rather than probabilities. Although the distinction may be subtle, you identified a crucial aspect of it: the product of two densities is (almost) never a density. (See the comment thread for a discussion of why "almost" is required.)
The language in the blog hints at this--but at the same time gets it subtly wrong--so let's analyze it:
The mean of this distribution is the configuration for which both estimates are most likely, and is therefore the best guess of the true configuration given all the information we have.
We have already observed the product is not a distribution. (Although it could be turned into one via multiplication by a suitable number, that's not what's going on here.)
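A quick numerical check illustrates this. The sketch below (with made-up means and SDs) integrates each Gaussian density and their pointwise product over a grid: each density has total area 1, but the product does not.

```python
import numpy as np

def npdf(x, mu, sigma):
    """Gaussian probability density."""
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Two hypothetical sensor densities (means and SDs invented for illustration)
x = np.linspace(-10.0, 13.0, 200001)
dx = x[1] - x[0]
f1 = npdf(x, 0.0, 1.0)
f2 = npdf(x, 3.0, 1.0)

area_f1 = (f1 * dx).sum()            # ≈ 1.0: f1 is a genuine density
area_product = (f1 * f2 * dx).sum()  # ≈ 0.0297: the product is not a density
print(area_f1, area_product)
```

The product's area depends on how far apart the two centers are, which is exactly why it cannot serve as a probability density without renormalization.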
The words "estimates" and "best guess" indicate that this machinery is being used to estimate a parameter--in this case, the "true configuration" (x,y coordinates).
Unfortunately, the mean is not the best guess. The mode is. This is the Maximum Likelihood (ML) Principle.
In order for the blog's explanation to make sense, we have to suppose the following. First, there is a true, definite location. Let's abstractly call it $\mu$. Second, each "sensor" is not reporting $\mu$. Instead it reports a value $X_i$ that is likely to be close to $\mu$. The sensor's "Gaussian" gives the probability density for the distribution of $X_i$. To be very clear, the density for sensor $i$ is a function $f_i$, depending on $\mu$, with the property that for any region $\mathcal{R}$ (in the plane), the chance that the sensor will report a value in $\mathcal{R}$ is
$$\Pr(X_i \in \mathcal{R}) = \int_{\mathcal{R}} f_i(x;\mu) dx.$$
Third, the two sensors are assumed to be operating with physical independence, which is taken to imply statistical independence.
By definition, the likelihood of the two observations $x_1, x_2$ is the probability density they would have under this joint distribution, given that the true location is $\mu$. The independence assumption implies that is the product of the individual densities. To clarify a subtle point:

- The product function that assigns $f_1(x;\mu)f_2(x;\mu)$ to a single observation $x$ is not a probability density for $x$; however,
- The product $f_1(x_1;\mu)f_2(x_2;\mu)$ is the joint density for the ordered pair $(x_1, x_2)$.
In the posted figure, $x_1$ is the center of one blob, $x_2$ is the center of the other, and the points of the plane represent possible values of $\mu$. Notice that neither $f_1$ nor $f_2$ is intended to say anything at all about probabilities of $\mu$! $\mu$ is just an unknown fixed value. It's not a random variable.
Here is another subtle twist: the likelihood is considered a function of $\mu$. We have the data--we're just trying to figure out what $\mu$ is likely to be. Thus, what we need to be plotting is the likelihood function
$$\Lambda(\mu) = f_1(x_1;\mu)f_2(x_2;\mu).$$
It is a singular coincidence that this, too, happens to be a Gaussian! The demonstration is revealing. Let's do the math in just one dimension (rather than two or more) to see the pattern--everything generalizes to more dimensions. The logarithm of a Gaussian has the form
$$\log f_i(x_i;\mu) = A_i - B_i(x_i-\mu)^2$$
for constants $A_i$ and $B_i$. Thus the log likelihood is
$$\begin{aligned}
\log \Lambda(\mu) &= A_1 - B_1(x_1-\mu)^2 + A_2 - B_2(x_2-\mu)^2 \\
&= C - (B_1+B_2)\left(\mu - \frac{B_1x_1+B_2x_2}{B_1+B_2}\right)^2
\end{aligned}$$
where $C$ does not depend on $\mu$. This is the log of a Gaussian where the role of the $x_i$ has been replaced by that weighted mean shown in the fraction.
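Here is a minimal numerical sketch (the readings $x_i$ and sensor SDs below are invented for illustration) that maximizes the log likelihood over a grid of $\mu$ values and checks the result against the weighted mean in the formula:

```python
import numpy as np

# Hypothetical one-dimensional readings and sensor SDs; B_i = 1/(2 sigma_i^2)
x1, s1 = 2.0, 1.0
x2, s2 = 5.0, 0.5
B1, B2 = 1 / (2 * s1**2), 1 / (2 * s2**2)

mu = np.linspace(0.0, 7.0, 700001)
log_lik = -B1 * (x1 - mu)**2 - B2 * (x2 - mu)**2   # additive constants dropped

mle_grid = mu[np.argmax(log_lik)]                  # grid maximizer
mle_formula = (B1 * x1 + B2 * x2) / (B1 + B2)      # the weighted mean
print(mle_grid, mle_formula)                       # both ≈ 4.4
```

Note how the more precise sensor (smaller SD, hence larger $B_i$) pulls the estimate toward its reading.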
Let's return to the main thread. The ML estimate of $\mu$ is that value which maximizes the likelihood. Equivalently, it maximizes this Gaussian we just derived from the product of the Gaussians. By definition, the maximum is a mode. It is coincidence--resulting from the point symmetry of each Gaussian around its center--that the mode happens to coincide with the mean.
This analysis has revealed that several coincidences in the particular situation have obscured the underlying concepts:
- a multivariate (joint) distribution was easily confused with a univariate distribution (which it is not);
- the likelihood looked like a probability distribution (which it is not);
- the product of Gaussians happens to be Gaussian (a regularity which is not generally true when sensors vary in non-Gaussian ways);
- and their mode happens to coincide with their mean (which is guaranteed only for sensors with symmetric responses around the true values).
Only by focusing on these concepts and stripping away the coincidental behaviors can we see what's really going on.
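To illustrate the last coincidence breaking down, here is a sketch with hypothetical asymmetric sensors whose errors are exponentially distributed, so each reports $\mu$ plus an $\text{Exp}(1)$ delay: the likelihood's mode and mean no longer coincide.

```python
import numpy as np

# Hypothetical asymmetric sensors: f_i(x; mu) = exp(-(x - mu)) for x >= mu,
# so the likelihood of (x1, x2) is nonzero only for mu <= min(x1, x2).
x1, x2 = 2.0, 3.0
mu = np.linspace(-5.0, 2.0, 700001)
lik = np.exp(-(x1 - mu)) * np.exp(-(x2 - mu))

mode = mu[np.argmax(lik)]             # ML estimate: the boundary min(x1, x2) = 2.0
mean = (mu * lik).sum() / lik.sum()   # mean of the normalized likelihood ≈ 1.5
print(mode, mean)
```

The ML estimate sits at the boundary $\min(x_1, x_2)$, while the mean of the normalized likelihood lies strictly below it, so an estimator that reported the "mean of the combined blob" would be systematically off here.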
For a fair game with $n = 3$ and $m = 7$, multiply player $A$'s score by $\left(\frac{2(7)}{3+7}\right)^{1/3} \approx 1.1187$.
More generally, suppose that player $A$ rolls $n$ times and player $B$ rolls $m$ times (without loss of generality, we assume $m \geq n$). As others have already noted, the (unscaled) score of player $A$ is
$$X \sim \mathrm{Beta}(n, 1)$$
and the score of player $B$ is
$$Y \sim \mathrm{Beta}(m, 1)$$
with $X$ and $Y$ independent. Thus, the joint distribution of $X$ and $Y$ is
$$f_{XY}(x, y) = nmx^{n-1}y^{m-1}, \ 0 < x, y < 1.$$
The goal is to find a constant $c$ such that
$$P(Y \geq cX) = \frac{1}{2}.$$
This probability can be found in terms of $c$, $n$ and $m$ as follows.
\begin{align*}
P(Y \geq cX) &= \int_0^{1/c}\int_{cx}^1 nmx^{n-1}y^{m-1}dydx \\[1.5ex] &= \cdots \\[1.5ex]
&= c^{-n}\left\{\frac{m}{n+m} \right\}
\end{align*}
Setting this equal to $1/2$ and solving for $c$ yields
$$c = \left(\frac{2m}{n+m}\right)^{1/n}.$$
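A quick Monte Carlo sketch can check this, assuming each roll is $\mathrm{Uniform}(0,1)$ and a player's score is the best (maximum) of their rolls, whose distribution is $\mathrm{Beta}(k, 1)$ after $k$ rolls:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 7
c = (2 * m / (n + m))**(1 / n)            # ≈ 1.1187

trials = 200_000
X = rng.random((trials, n)).max(axis=1)   # max of n U(0,1) draws ~ Beta(n, 1)
Y = rng.random((trials, m)).max(axis=1)   # max of m U(0,1) draws ~ Beta(m, 1)
p = (Y >= c * X).mean()
print(p)                                  # ≈ 0.5: the scaled game is fair
```

With 200,000 trials the standard error is about 0.001, so the simulated probability should land very close to 1/2.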
I can't make sense of any of the statements in the question, but you're asking for terminology, and that request at least I can understand and appreciate, so here's some to get you started.
I will italicize technical terms and key concepts to draw them to your attention.
The upward face of a die after it is thrown or the side of a coin that shows after it is flipped are outcomes. The assignment of numbers to outcomes, such as labeling the faces of a die with 1, 2, ..., 6 or the faces of a coin 0 and 1, is called a random variable. A range (or, generally, a set) of values for which we would like to know a probability is called an event.
With two dice you have two random quantities and are using two random variables to analyze them. This is a multivariate setting. Now the outcomes consist of ordered pairs: one outcome for the first variable, one outcome for the second. These outcomes have joint probabilities. The most fundamental issue concerns whether the outcomes of one are somehow connected with the outcomes of the other. When they are not, the two variables are independent. Throws of two dice are usually physically unrelated, so we expect them to be independent. If you are using dice as a model for some other phenomenon, though, watch out! You need to determine whether it is appropriate to treat your variables as independent.
With two independent variables $X$ and $Y$, the probabilities multiply. Specifically, let $E$ be an event for $X$ (that is, a set of numbers that $X$ might attain) and let $F$ be an event for $Y$. Then the joint probability of $E$ and $F$, meaning the probability that $X$ has a value in $E$ (written $\Pr_X[E]$) and $Y$ has a value in $F$, equals the product of the probabilities, $\Pr_X[E] \times \Pr_Y[F]$. When the variables are not independent, the joint probability of $E$ and $F$ may differ (considerably) from this product.
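A small simulation can illustrate the product rule for two independent dice (the particular events $E$ and $F$ below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 200_000
X = rng.integers(1, 7, trials)   # first die, faces 1..6
Y = rng.integers(1, 7, trials)   # second die, faces 1..6

E = (X >= 5)                     # event E: first die shows 5 or 6, Pr = 1/3
F = (Y == 1)                     # event F: second die shows 1,     Pr = 1/6
joint = (E & F).mean()           # simulated joint probability
product = E.mean() * F.mean()    # product of marginal probabilities
print(joint, product)            # both ≈ 1/18 ≈ 0.0556
```

Because the two dice are simulated independently, the joint frequency matches the product of the marginal frequencies up to sampling error.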
It sounds to me like you are asking about either the theoretical computation of joint probabilities or about how to estimate joint probabilities from observations.
An excellent place to get up to speed quickly on all this stuff, as well as sort out what t-tests really do, what probability distributions are, and what "gaussian" really means, is to read through Gonick & Smith's The Cartoon Guide to Statistics (seriously!).