Solved – Probability for Fair Dice/Coin According to Bayes' Theorem

Tags: bayesian, conditional-probability

I've been going through some discussion of the classic dice-roll or coin-toss sequence. According to classical probability theory, there is no connection between failing to roll a 6 on the first roll of a die and getting a 6 on the next roll: the probability stays the same, $1/6$, because each event is independent.

However, taking coins as an easier example, when I look at Bayesian theory it seems that if you observed 99 coin tosses all coming up heads, there would be some form of recalculation and the odds would change; it wouldn't be a 50-50 chance any more. I would like someone to explain exactly what that calculation would be and how you would arrive at it.

On a related issue, when I read about the Law of Large Numbers I see that there would be an 'expectation' that, in my example above, a tails toss would arise to return the sequence to the average. I can't see how to disentangle this from the Gambler's Fallacy, whereby the gambler expects tails after a long run of heads in one sequence. The usual explanation is that the Law of Large Numbers only applies to large sets of trials, but what exactly is a large set of trials? 50 coin tosses? 5000? It seems a grey area to me.

All comments welcome!

Best Answer

The probability of throwing a 6 with a fair die is $1/6$, and this has nothing to do with Bayes' theorem. Bayes' theorem is used to update our knowledge via conditional probabilities. If you are learning about the properties of a die by observing its outcomes, then you update your knowledge as you observe new data, so the *estimated* probabilities will change. The die itself, however, does not change because you observe it and calculate probabilities, so the *true* probability does not change.
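To make the updating concrete, here is a minimal sketch in Python of the standard conjugate calculation, assuming a uniform $\mathrm{Beta}(1,1)$ prior over the unknown heads-probability $p$ (the prior choice and the helper name `posterior_after_tosses` are illustrative, not fixed by the question). Because the Beta prior is conjugate to the Bernoulli likelihood, after $h$ heads and $t$ tails the posterior is simply $\mathrm{Beta}(1+h,\,1+t)$, with mean $(1+h)/(2+h+t)$.

```python
# Sketch of a Bayesian update for an unknown coin bias p, assuming a
# Beta(1, 1) (uniform) prior. The Beta prior is conjugate to the Bernoulli
# likelihood, so the posterior after h heads and t tails is Beta(1+h, 1+t);
# no numerical integration is needed.

def posterior_after_tosses(heads, tails, prior_a=1.0, prior_b=1.0):
    """Return the Beta posterior parameters (a, b) and the posterior mean of p."""
    a = prior_a + heads
    b = prior_b + tails
    posterior_mean = a / (a + b)
    return a, b, posterior_mean

# 99 heads in a row: the *estimate* of p moves far from 0.5 ...
a, b, est = posterior_after_tosses(heads=99, tails=0)
print(f"Posterior: Beta({a:.0f}, {b:.0f}), estimated P(heads) = {est:.4f}")
# -> Posterior: Beta(100, 1), estimated P(heads) = 0.9901
```

So after 99 straight heads the posterior mean is $100/101 \approx 0.99$: the Bayesian's estimate of the coin's bias changes, even though a coin *known* to be fair would still have a next-toss probability of exactly $1/2$.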

Your second question is answered in "Does 10 heads in a row increase the chance of the next toss being a tail?"
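For intuition on the "how large is large?" part, a small simulation (assuming a fair coin; the seed and sample sizes below are arbitrary) shows that the running proportion of heads drifts toward $1/2$ gradually rather than past any sharp cutoff, and that it does so by *diluting* early streaks, not by tails becoming more likely:

```python
# Illustration of the Law of Large Numbers for a fair coin: the running
# proportion of heads approaches 0.5 as the number of tosses grows, while
# each individual toss remains 50-50 regardless of any preceding streak.
import random

random.seed(42)  # arbitrary seed, for reproducibility

for n in (50, 5_000, 500_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"n = {n:>7}: proportion of heads = {heads / n:.4f}")

# Typical output: the proportion gets steadily closer to 0.5 as n grows,
# which is why there is no single n at which the law "kicks in" --
# convergence is gradual, not a threshold.
```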
