The connection between expected value (X) of a dice roll and predicting the odds of “X+0.5 or more” at least 50% of the time

I always knew that with standard dice rolls (1d6, 2d8, 3d12, etc.) you can average the minimum and maximum values to get the expected value, and that you can roll at least the expected value 50% of the time; if that value isn't an integer, you just take the next discrete value above it (like 4+ for 1d6, whose expected value is 3.5).
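
For instance, here is a quick brute-force check in Python (a sketch of my own, independent of anydice) for those standard rolls:

```python
from fractions import Fraction
from itertools import product
from math import ceil

def sum_distribution(n, sides):
    """Exact distribution of the sum of n fair dice with the given number of sides."""
    counts = {}
    for rolls in product(range(1, sides + 1), repeat=n):
        counts[sum(rolls)] = counts.get(sum(rolls), 0) + 1
    return {s: Fraction(c, sides ** n) for s, c in counts.items()}

for n, sides in [(1, 6), (2, 8), (3, 12)]:
    dist = sum_distribution(n, sides)
    ev = sum(v * p for v, p in dist.items())   # expected value = (min + max) / 2
    target = ceil(ev)                          # next integer at or above the EV
    p_at_least = sum(p for v, p in dist.items() if v >= target)
    print(f"{n}d{sides}: EV = {float(ev)}, P(roll >= {target}) = {float(p_at_least):.3f}")
```

That prints a probability of at least 0.5 in each case (exactly 0.5 when the EV is not an integer).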

But I was playing with some custom dice pools in anydice, where you roll a pool of d10s, every 6+ is a hit, and every 10 is 2 hits, and I noticed something odd: if I take the expected value and add 0.5, the result rounded down "predicts" the number of hits I reach at least roughly 50% of the time.

For example:

  • Rolling 1d10, the expected value is 0.6; adding 0.5 I get 1.1, which means I get at least 1 hit around 50% of the time, since I'm only a little over 1.
  • 2d10 has an EV of 1.2, which gets me 1.7, which means I get at least 1 hit quite often (actually 75% of the time), since I'm well over 1.
  • 6d10 has an EV of 3.6, which gets me 4.1, which means I get at least 4 hits close to 50% of the time.
  • 8d10 has an EV of 4.8, which gets me 5.3, which means I get at least 5 hits a bit over 50% of the time (actually 55%), since I'm a bit over 5.
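
Those exact numbers can be reproduced outside anydice with a short Python sketch (the per-die hit distribution below is just my encoding of the pool rules):

```python
from fractions import Fraction
from math import floor

# Per-die hit distribution: faces 1-5 = 0 hits, 6-9 = 1 hit, 10 = 2 hits.
SINGLE_DIE = {0: Fraction(5, 10), 1: Fraction(4, 10), 2: Fraction(1, 10)}

def hit_distribution(k):
    """Exact distribution of total hits on k dice, by repeated convolution."""
    dist = {0: Fraction(1)}
    for _ in range(k):
        new = {}
        for hits, p in dist.items():
            for extra, q in SINGLE_DIE.items():
                new[hits + extra] = new.get(hits + extra, Fraction(0)) + p * q
        dist = new
    return dist

for k in (1, 2, 6, 8):
    dist = hit_distribution(k)
    ev = sum(h * p for h, p in dist.items())
    target = floor(ev + Fraction(1, 2))        # "EV + 0.5, rounded down"
    p_at_least = sum(p for h, p in dist.items() if h >= target)
    print(f"{k}d10: EV = {float(ev):.1f}, P(>= {target} hits) = {float(p_at_least):.3f}")
```

The 6d10 and 8d10 lines come out to about 50.6% and 54.7%, matching the "close to 50%" and "actually 55%" observations above.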

I understand the intuition of why that works, and I know it must be related to how close the expected value is to the actual discrete outcomes, but I would like to know if there is some formal connection or explanation behind that kind of prediction.

The relevant anydice for my custom dice pool: https://anydice.com/program/1efbc

Best Answer

I hope this gets at your question; please let me know if not.

As I understand it, there are two separate things at play: the operation of adding 0.5 and rounding down (a.k.a. flooring), and an interesting fact about symmetric distributions.

First of all, adding 0.5 to a number and then rounding down is equivalent, for positive numbers, to rounding to the nearest integer (breaking ties by rounding up).
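
A trivial check of this, using the expected values from the question plus one tie case (2.5):

```python
from math import floor

# Adding 0.5 and flooring = rounding to the nearest integer, ties going up.
for x in (0.6, 1.2, 2.5, 3.6, 4.8):
    print(x, "->", floor(x + 0.5))   # 1, 1, 3, 4, 5
```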

Second, the distribution you're working with is symmetric. The distribution for any fair die $dN$ is uniform, and thus symmetric, and the sum of independent symmetrically distributed random variables is also symmetrically distributed. For symmetric distributions, the expected value coincides with the median. This means that with probability at least $1/2$, the outcome is greater than or equal to the expected value.

With dice, the expected value can be a rational (non-integer) number even though all the outcomes are integers. What this means in your situation is that we can round the expected value (using the "add 0.5 and floor" method) and claim that with probability at least one half the outcome will be at least that big.
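
For instance, a quick check with plain d6 sums (my own sketch; note the 2d6 case, where the expected value is itself an attainable outcome and the probability is strictly more than one half):

```python
from fractions import Fraction
from itertools import product
from math import floor

def sum_dist(n, sides=6):
    """Exact distribution of the sum of n fair dice; symmetric about n*(sides+1)/2."""
    counts = {}
    for rolls in product(range(1, sides + 1), repeat=n):
        counts[sum(rolls)] = counts.get(sum(rolls), 0) + 1
    return {s: Fraction(c, sides ** n) for s, c in counts.items()}

for n in (1, 2, 3):
    dist = sum_dist(n)
    ev = sum(v * p for v, p in dist.items())
    rounded = floor(ev + Fraction(1, 2))       # "add 0.5 and floor"
    p = sum(q for v, q in dist.items() if v >= rounded)
    print(f"{n}d6: EV = {float(ev)}, P(sum >= {rounded}) = {float(p):.3f}")
```

This prints 0.500 for 1d6 and 3d6, and about 0.583 for 2d6.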

This would be the end of my answer if we were just talking about the faces of the dice representing themselves. However, you mentioned a game where the faces get mapped to hit counts: $1,2,3,4,5$ are $0$ hits, $6,7,8,9$ are $1$ hit, and $10$ is $2$ hits. We can think of this as a discrete random variable $X$ with $P(X=0)=5/10$, $P(X=1)=4/10$ and $P(X=2)=1/10$. This is not a symmetric distribution.

The explanation for why the trick still works in the $kd10$ examples you simulated is that for the single case $1d10$ the expected value (i.e., mean) and the median differ by only $0.1<0.5$ (the CDF hits exactly $1/2$ at $0$ hits, so the median is conventionally taken as $0.5$). You can check what the difference between the median and the expected value is for the other $k$ in your $kd10$ experiments.
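
Here is a sketch of that check (the only assumption beyond the pool rules is the usual midpoint convention for the median when the CDF hits exactly $1/2$):

```python
from fractions import Fraction

# Per-die hit distribution: faces 1-5 = 0 hits, 6-9 = 1 hit, 10 = 2 hits.
SINGLE_DIE = {0: Fraction(5, 10), 1: Fraction(4, 10), 2: Fraction(1, 10)}

def hit_distribution(k):
    """Exact distribution of total hits on k dice, by repeated convolution."""
    dist = {0: Fraction(1)}
    for _ in range(k):
        new = {}
        for hits, p in dist.items():
            for extra, q in SINGLE_DIE.items():
                new[hits + extra] = new.get(hits + extra, Fraction(0)) + p * q
        dist = new
    return dist

def median(dist):
    """Median of a discrete distribution; if the CDF hits exactly 1/2 at some value,
    take the midpoint of that value and the next attainable one (so 1d10 gives 0.5)."""
    values = sorted(dist)
    cdf = Fraction(0)
    for i, v in enumerate(values):
        cdf += dist[v]
        if cdf > Fraction(1, 2):
            return Fraction(v)
        if cdf == Fraction(1, 2):
            return Fraction(values[i] + values[i + 1], 2)

for k in (1, 2, 6, 8):
    d = hit_distribution(k)
    mean = sum(h * p for h, p in d.items())
    med = median(d)
    print(f"{k}d10: mean = {float(mean):.1f}, median = {float(med)}, "
          f"difference = {float(mean - med):+.1f}")
```

For each of the pool sizes in your examples, the mean and the median come out within 0.5 of each other, which is why the "add 0.5 and floor" prediction keeps working.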