Solved – Why is the concept of Type 1 error incompatible with Bayesianism?

bayesian, hypothesis-testing

I've read in various places (see e.g. the comments on this question) that the concept of Type 1 error is incompatible with Bayesian paradigms for hypothesis testing.

Why is that, exactly? I can't seem to fit together the various pieces and definitions of what "Bayesian hypothesis testing" actually means in order to see why Type 1 error doesn't make sense in that paradigm.

EDIT: one of the answers below seems to imply that Bayesian methods do not concern themselves with whether a hypothesis is true or false, i.e. that Bayesians only deal with assigning probabilities to hypotheses, not taking actions.

But then what about Bayes rules (i.e. decision rules that minimize the Bayes risk)? Bayes rules still result in rejecting (or not rejecting) the null hypothesis. So is it fair to say that making any decision based on a Bayes rule is non-Bayesian at some level? I'm clearly misunderstanding something here.

Best Answer

The Bayesian approach determines the probability of a hypothesis under a model, starting from an "a priori" probability that is then updated with data. Classical hypothesis testing, by contrast, does not allow assigning a probability to the null hypothesis; it only accepts or rejects it. The Type I error rate is the probability of wrongly rejecting the null hypothesis when it is true. It is therefore something completely different from the Bayesian quantity: the probability refers to the act of making a mistake, not to the hypothesis itself.
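To make the contrast concrete, here is a minimal sketch (not from the original answer) for a coin-flipping setup I'm assuming for illustration: a point null H0: theta = 0.5 against H1: theta ~ Uniform(0, 1), with equal prior odds. It computes both the frequentist tail probability that Type I error control is about and the Bayesian posterior probability of H0, which can disagree sharply:

```python
from math import comb

def posterior_h0(k, n, prior_h0=0.5):
    """Posterior P(H0 | k heads in n flips) for H0: theta = 0.5
    vs H1: theta ~ Uniform(0, 1), via marginal likelihoods."""
    m0 = comb(n, k) * 0.5 ** n  # marginal likelihood under the point null
    m1 = 1.0 / (n + 1)          # marginal likelihood under the uniform prior
    return (prior_h0 * m0) / (prior_h0 * m0 + (1 - prior_h0) * m1)

def one_sided_p_value(k, n):
    """Frequentist P(X >= k | H0): the tail probability a Type I error
    threshold (e.g. alpha = 0.05) is compared against."""
    return sum(comb(n, i) for i in range(k, n + 1)) * 0.5 ** n
```

With 14 heads in 20 flips, the one-sided p-value is about 0.058 (borderline "significant"), while the posterior probability of H0 is about 0.44: the frequentist number quantifies an error rate over hypothetical repetitions, the Bayesian number quantifies belief in the hypothesis itself.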

EDIT: I stressed that the Bayesian approach is based on assigning a probability to a hypothesis because this is a crucial difference with respect to the classical approach, which maintains that parameters are "assigned by Nature" and thus not random variables, so you cannot make probability statements directly about them. However, once you have your a posteriori probability, you can of course take action, either by choosing the hypothesis with the higher probability or the one minimizing a given cost function. See, for example, here: https://www.probabilitycourse.com/chapter9/9_1_8_bayesian_hypothesis_testing.php
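The "minimize a cost function" step above can be sketched as follows. This is an illustrative helper of my own (the function name and cost parameters are assumptions, not from the linked page): given the posterior probability of H0, pick the action with the smaller posterior expected loss.

```python
def bayes_decision(post_h0, cost_false_reject=1.0, cost_false_accept=1.0):
    """Choose the action minimizing posterior expected loss.
    Defaults give 0-1 loss, i.e. pick the more probable hypothesis."""
    exp_loss_reject = post_h0 * cost_false_reject         # wrong only if H0 is true
    exp_loss_accept = (1 - post_h0) * cost_false_accept   # wrong only if H0 is false
    return "reject H0" if exp_loss_reject < exp_loss_accept else "accept H0"
```

For instance, with P(H0 | data) = 0.44 and symmetric costs the rule rejects H0, but making a false rejection ten times as costly (`cost_false_reject=10.0`) flips the decision to accepting H0. Note that the error probability here is conditional on the observed data, not a long-run rate over repeated experiments, which is the point of the answer above.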

To sum up, I'd say the difference is: "a posteriori probability vs p-value", not "making vs not making a decision".
