Struggling to bridge understanding from Probability Theory to Hypothesis Testing in Statistics

hypothesis-testing, probability, probability-theory, statistics

I have recently done some Probability Theory and am struggling to come to terms with our new chapter: Statistics. My confusion specifically pertains to hypothesis testing.

Let $(\Omega, \mathcal{F},( P_{\vartheta})_{\vartheta\in\Theta})$ be a statistical model.

I know that the general idea of a test $F \in \mathcal{F}$ for a null hypothesis $H_{0}\subseteq \{ P_{\vartheta}:\vartheta\in\Theta\}$ is to determine a significance level $\alpha \in [0,1]$ such that $\sup_{P_{\vartheta} \in H_{0}}P_{\vartheta}(F)\leq \alpha$. We set $\alpha$ appropriately low so that the rejection event $F$ is extremely unlikely under every distribution in the null hypothesis:

$$P_{\vartheta}(F)\leq \alpha \quad \text{for all } P_{\vartheta} \in H_{0}.$$
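
To make the decision rule concrete, here is a minimal Python sketch (my own illustration, not from the chapter); the function name `test` and the threshold `c` are hypothetical:

```python
# Minimal sketch of the abstract decision rule: a test is a rejection
# region F, and the decision is simply membership of the outcome in F.
def test(omega, in_rejection_region):
    """Reject H0 iff the observed outcome omega lies in the rejection region F."""
    return "reject H0" if in_rejection_region(omega) else "keep H0"

# Hypothetical rejection region F = (c, infinity), e.g. c = -ln(alpha) for alpha = 0.05.
c = 2.9957
print(test(3.5, lambda w: w > c))  # -> reject H0
print(test(1.0, lambda w: w > c))  # -> keep H0
```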

Next, I get confused by the following:

If we observe $\omega \in F$ (!), then we should reject $H_{0}$; otherwise $H_{0}$ is kept. (Question 1: should it not be $\omega \in \mathcal{F}$?)

Question 2: Why are we looking at "singletons" $\omega$?

If, for example, we are on a continuous probability space $(\Omega, \mathcal{F},P_{\vartheta})$, then $P_{\vartheta}(\{\omega\})=0\leq \alpha$ for every $\omega$, so no $H_{0}$ would ever fit. Or are statistical models simply always discrete, so that looking at singletons makes sense?

Question 3: I may be missing some key intuition as to the difference between probability theory and statistics. Any intuition is greatly appreciated.

Best Answer

Here is a simple example. Let $(\Omega,\mathcal{F})=(\mathbb{R},\mathcal{B}(\mathbb{R}))$ and let $$ P_{\vartheta}(A):=\int_A \vartheta e^{-\vartheta x}1_{[0,\infty)}(x)\,dx, \quad \vartheta>0. $$ We want to test $H_0:\vartheta\ge 1$ against $H_1:\vartheta<1$. For $F=(c,\infty)$ with $c\ge 0$ we have $P_{\vartheta}(F)=e^{-\vartheta c}$, so taking $F=(-\ln\alpha,\infty)$ gives $\sup_{\vartheta\ge 1}P_{\vartheta}(F)=e^{\ln\alpha}=\alpha$. So if we observe an outcome $\omega>-\ln\alpha$, we reject $H_0$; this is sensible because large outcomes are more typical of small $\vartheta$, i.e. of $H_1$.
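
As a quick sanity check (my addition, not part of the original answer), here is a short Monte Carlo simulation in Python: sampling at the boundary case $\vartheta=1$ of $H_0$ and rejecting whenever $\omega>-\ln\alpha$ should reject in roughly an $\alpha$ fraction of trials.

```python
import numpy as np

# Monte Carlo check: at the boundary of H0 (theta = 1), the rejection
# probability P_theta(F) of F = (-ln(alpha), infinity) should be ~ alpha.
rng = np.random.default_rng(0)

alpha = 0.05
c = -np.log(alpha)            # rejection threshold
n_trials = 200_000

theta = 1.0                   # boundary case of H0: theta >= 1
omega = rng.exponential(scale=1 / theta, size=n_trials)  # Exp(theta) samples
rejection_rate = (omega > c).mean()

print(f"empirical rejection rate: {rejection_rate:.4f} (target alpha = {alpha})")
# For theta > 1 the rate only drops below alpha, so the level constraint holds.
```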
