In a certain board game, two fair dice are rolled. If a pair of the same number is obtained, the dice are rolled again. The player continues rolling the two dice and stops when the numbers turned up on the dice are different. Evaluate the probability that the player obtains a total of seven points when he rolls the dice as described.
The question does not make perfectly clear how the points are calculated, so there are two plausible interpretations; I will discuss the probability associated with each.
- Possibility 1: The points are just counted from the final roll and no other rolls matter.
E.g. the sequence of rolls $(1,1),(1,1),(1,2)$ would end in a failure, since the "total" here would be $1+2=3$, whereas the sequence of rolls $(1,1),(3,4)$ would be considered a success, since the final roll $(3,4)$ has $3+4=7$.
Under this interpretation, we may use a conditional-probability argument to convince ourselves that we can work instead in the restricted sample space $\{(1,2),(1,3),\dots,(2,1),(2,3),\dots,(6,4),(6,5)\}$, which has only $30$ members, six of which correspond to success. The probability is then $\frac{6}{30}=0.2$.
If it is hard to convince yourself of this, you may take a more tedious approach: letting $p$ be the probability of rolling doubles on a given roll and $q$ the probability of success on a given roll, i.e. $p=q=1/6$, the probability of eventually succeeding is $q+pq+p^2q+p^3q+\dots = \frac{1}{6}+\frac{1}{6^2}+\frac{1}{6^3}+\dots = \frac{1/6}{1-1/6}=0.2$, the same answer as before.
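If you would rather verify this numerically, here is a short Python sketch (purely illustrative; the function name `play` is my own) that counts the restricted sample space exactly and also simulates the game as described:

```python
import random
from fractions import Fraction

# Exact: restrict to the 30 outcomes with unequal dice and count those summing to 7.
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7) if a != b]
exact = Fraction(sum(1 for a, b in outcomes if a + b == 7), len(outcomes))
print(exact)  # 1/5

# Monte Carlo: reroll on doubles, score only the final (unequal) roll.
def play(rng):
    while True:
        a, b = rng.randint(1, 6), rng.randint(1, 6)
        if a != b:
            return a + b == 7

rng = random.Random(0)  # fixed seed for reproducibility
trials = 100_000
estimate = sum(play(rng) for _ in range(trials)) / trials
print(round(estimate, 3))  # close to 0.2
```

Both the exact count and the simulated frequency agree with the $0.2$ computed above.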
- Possibility 2: The points are counted from all rolls combined.
E.g. the sequence of rolls $(1,1),(1,1),(1,2)$ would end in a success this time, since the "total" here would be $1+1+1+1+1+2=7$, whereas the sequence of rolls $(1,1),(3,4)$ would be considered a failure, since the total here is $1+1+3+4=9$.
Under this interpretation, we need a great deal more casework. Fortunately, the desired total is small, and the game is guaranteed to be won or lost by the third roll.
The possibilities:
- Win on the first roll
- Win on the second roll after having rolled (1,1)
- Win on the second roll after having rolled (2,2)
- Win on the third roll after having rolled (1,1) twice in a row
Convince yourself that there are no other possibilities. We could not, for example, win after rolling $(3,3)$ on the first turn, because the final total would be $3+3+\dots+a+b>7$.
So we continue by computing the probability of each case and summing, for a final total probability of:
$$\frac{1}{6}+\frac{1}{36}\cdot\frac{4}{36} + \frac{1}{36}\cdot\frac{2}{36}+\frac{1}{36}\cdot\frac{1}{36}\cdot\frac{2}{36} = \frac{3997}{23328}$$
The terms above are calculated directly via the multiplication principle and direct counting. For example, the final term corresponds to rolling $(1,1)$, followed by another $(1,1)$, followed by a roll summing to three (noting that no combination summing to three is a pair).
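The casework can be checked exactly with a short recursion (a Python sketch; the function name `win_prob` is mine). Since doubles force another roll and the running total only grows, we sum the probability of the game stopping with a cumulative total of exactly seven:

```python
from fractions import Fraction

def win_prob(total=0):
    """Probability that the cumulative total equals 7 when the game stops,
    given the points accumulated so far."""
    p = Fraction(0)
    for a in range(1, 7):
        for b in range(1, 7):
            t = total + a + b
            if a == b:
                # Doubles: must roll again; continuing is only viable under 7,
                # since the total can never decrease.
                if t < 7:
                    p += Fraction(1, 36) * win_prob(t)
            elif t == 7:
                # Different numbers: the game stops here with a total of 7.
                p += Fraction(1, 36)
    return p

print(win_prob())  # 3997/23328
```

The recursion visits exactly the four cases listed above (note it also confirms that a first roll of $(3,3)$ leads nowhere: from a total of $6$, no stopping roll can sum to $1$).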
There's no difference between the two procedures (throwing the dice simultaneously or one by one) as far as independence is concerned. To see this, you might go back to the formal definition of independence.
Definition. (Statistical) independence of two events.
Two events $A$ and $B$ are independent if their joint probability equals the product of their respective probabilities, i.e. $$P(A \cap B)=P(A)P(B)$$
As we see, it is NOT that (statistical) independence implies the product rule; rather, statistical independence is defined by the product rule.
It is easy to check from the joint probability distribution that the throws of the two dice are statistically independent.
Assume that the dice are fair $($i.e. each side comes up with equal probability $\frac{1}{6})$. If we define $X$ to be the number we get from the $1$st die and $Y$ to be the same from the $2$nd die, then
$$P(X=i, Y=j)=\frac{1}{36}=\frac{1}{6} \cdot \frac{1}{6} = P(X=i)P(Y=j),$$
for $i=1,\dots,6$ and $j=1,\dots,6$.
We can use this to compute $P(X \in A, ~Y \in B)$ for any sets $A, B \subseteq \{1,\dots,6\}$, which comes out to be $P(X \in A)P(Y \in B)$.
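As a sanity check, one can enumerate this identity over the $36$ equally likely outcomes (a Python sketch; the sets $A$ and $B$ here are illustrative choices of my own, and any subsets of $\{1,\dots,6\}$ behave the same way):

```python
from fractions import Fraction
from itertools import product

SIDES = range(1, 7)

def prob(event):
    """P of an event over the 36 equally likely outcomes (x, y)."""
    hits = sum(1 for x, y in product(SIDES, SIDES) if event(x, y))
    return Fraction(hits, 36)

# Hypothetical example sets A, B ⊆ {1,...,6}.
A, B = {2, 4, 6}, {1, 5}

joint = prob(lambda x, y: x in A and y in B)
marginal_product = prob(lambda x, y: x in A) * prob(lambda x, y: y in B)
print(joint == marginal_product)  # True
```

Nothing in this computation refers to whether the dice were thrown together or one by one; only the joint distribution matters.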
Now check: do you think the joint distribution of $(X, Y)$ changes depending on whether the dice are thrown together or one by one? In fact, once the outcome of the experiment is attached to a random variable with a known probability distribution, does the physical (not mathematical!) procedure of the experiment matter at all?
Lastly, I should say (informally) that, in a discussion of statistics and probability, we only care about statistical independence, which is well defined. In philosophy, the notion of independence can be far too complicated for mathematicians to handle, so we don't bother with it here!
Best Answer
(Side note: the events are neither mutually exclusive nor independent.)
$P(E\cap F)$ is the probability that one die is a six and the other die is not. That's $\frac 1 6\cdot\frac 5 6+\frac 5 6\cdot\frac 1 6$, obtained by adding the probability that the first die is a six and the other is not to the probability that the first die is not a six and the other is. (NB: those two events are mutually exclusive and partition $E\cap F$.)
$$\mathsf P(E\cap F) = 2 \cdot \frac 1 6 \cdot \frac 5 6\\ = \frac{10}{36}$$
Then we just use conditional probability as you noted.
$$\mathsf P(E\mid F) = \frac{\mathsf P(E\cap F)}{\mathsf P(F)}\\ = \frac{10/36}{30/36} \\ = \frac {1}{3}$$
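Reading $F$ as "the two dice show different numbers" (so $\mathsf P(F)=30/36$) and $E\cap F$ as "exactly one die shows a six", as in the computation above, a quick enumeration (Python sketch) confirms the result:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # 36 equally likely pairs

# F: the two dice show different numbers; E ∩ F: exactly one die is a six.
F = [(a, b) for a, b in outcomes if a != b]
E_and_F = [(a, b) for a, b in F if (a == 6) != (b == 6)]

p_F = Fraction(len(F), 36)              # 30/36
p_E_and_F = Fraction(len(E_and_F), 36)  # 10/36
print(p_E_and_F / p_F)  # 1/3
```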