Wald Tests – Are t-Test and One-Way ANOVA Both Wald Tests?

anova, hypothesis-testing

The $t$-test for testing whether the mean of a normally distributed sample equals a constant is said to be a Wald test, with the standard deviation of the sample mean estimated via the Fisher information of the normal distribution evaluated at the sample estimates. But the test statistic in the $t$-test has Student's $t$ distribution, while the test statistic in a Wald test asymptotically has a $\chi^{2}$ distribution. How can that be reconciled?

In one-way ANOVA, the test statistic is the ratio of the between-group variance to the within-group variance. Is that also a Wald test? But the test statistic in one-way ANOVA has an $F$ distribution, while the test statistic in a Wald test asymptotically has a $\chi^{2}$ distribution. How can that be reconciled?

Best Answer

Consider the following setup. We have a $p$-dimensional parameter vector $\theta$ that specifies the model completely and a maximum-likelihood estimator $\hat{\theta}$. The Fisher information in $\theta$ is denoted $I(\theta)$. What is usually referred to as the Wald statistic is

$$(\hat{\theta} - \theta)^T I(\hat{\theta}) (\hat{\theta} - \theta)$$

where $I(\hat{\theta})$ is the Fisher information evaluated at the maximum-likelihood estimator. Under regularity conditions the Wald statistic asymptotically follows a $\chi^2$-distribution with $p$ degrees of freedom when $\theta$ is the true parameter. The Wald statistic can be used to test a simple hypothesis $H_0 : \theta = \theta_0$ on the entire parameter vector.
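As a sanity check on the $\chi^2_p$ claim, here is a small NumPy simulation (my own illustration, not from the answer) for the normal model with $p = 2$ parameters $(\mu, \sigma^2)$, whose Fisher information for $n$ i.i.d. observations is $\mathrm{diag}(n/\sigma^2,\, n/(2\sigma^4))$. The empirical mean of the Wald statistic should be close to $p = 2$, the mean of a $\chi^2_2$ variable.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma2, n, reps = 1.0, 4.0, 500, 2000

stats = []
for _ in range(reps):
    x = rng.normal(mu, np.sqrt(sigma2), n)
    mu_hat = x.mean()
    s2_hat = x.var(ddof=0)  # MLE of sigma^2 (divide by n)
    # deviation of the MLE from the true parameter vector
    d = np.array([mu_hat - mu, s2_hat - sigma2])
    # Fisher information of n iid normal observations, evaluated at the MLE
    info = np.diag([n / s2_hat, n / (2 * s2_hat**2)])
    stats.append(d @ info @ d)

stats = np.array(stats)
mean_stat = stats.mean()  # should be near 2, the mean of chi^2 with 2 df
```

With $n = 500$ the asymptotics have essentially kicked in, so the simulated mean lands close to 2.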

With $\Sigma(\theta) = I(\theta)^{-1}$ the inverse Fisher information, the Wald test statistic of the hypothesis $H_0 : \theta_1 = \theta_{0,1}$ on a single coordinate is $$\frac{(\hat{\theta}_1 - \theta_{0,1})^2}{\Sigma(\hat{\theta})_{11}}.$$ Its asymptotic distribution is a $\chi^2$-distribution with 1 degree of freedom.

For the normal model where $\theta = (\mu, \sigma^2)$ is the vector of the mean and the variance parameters, the Wald test statistic for testing $\mu = \mu_0$ is $$\frac{n(\hat{\mu} - \mu_0)^2}{\hat{\sigma}^2}$$ with $n$ the sample size. Here $\hat{\sigma}^2$ is the maximum-likelihood estimator of $\sigma^2$ (where you divide by $n$). The $t$-test statistic is $$\frac{\sqrt{n}(\hat{\mu} - \mu_0)}{s}$$ where $s^2$ is the unbiased estimator of the variance (where you divide by $n-1$). Since $s^2 = \frac{n}{n-1}\hat{\sigma}^2$, the Wald test statistic equals $\frac{n}{n-1}$ times the square of the $t$-test statistic; they are not identical but are asymptotically equivalent as $n \to \infty$. The squared $t$-test statistic has an exact $F(1, n-1)$-distribution, which converges to the $\chi^2$-distribution with 1 degree of freedom as $n \to \infty$.
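The exact relation between the two statistics is easy to verify numerically. A minimal NumPy sketch (my own illustration, with arbitrary simulated data):

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu0 = 30, 0.0
x = rng.normal(loc=0.5, scale=2.0, size=n)

mu_hat = x.mean()
sigma2_mle = x.var(ddof=0)   # MLE of the variance: divide by n
s2 = x.var(ddof=1)           # unbiased estimator: divide by n - 1

wald = n * (mu_hat - mu0) ** 2 / sigma2_mle
t_stat = np.sqrt(n) * (mu_hat - mu0) / np.sqrt(s2)

# The Wald statistic is the squared t-statistic times n/(n-1)
diff = wald - t_stat**2 * n / (n - 1)
```

The factor $n/(n-1)$ vanishes asymptotically, which is why the two tests agree for large $n$.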

The same story holds for the $F$-test in one-way ANOVA with $k$ groups and $n$ observations in total: $k-1$ times the $F$-statistic is, up to a similar finite-sample correction factor, a Wald statistic, and if $X \sim F(k-1, n-k)$ then $(k-1)X$ converges in distribution to the $\chi^2$-distribution with $k-1$ degrees of freedom as $n \to \infty$.
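To make the ANOVA case concrete, here is a hedged NumPy sketch (my own illustration; group sizes and seed are arbitrary) computing the one-way $F$-statistic from its sum-of-squares definition and the corresponding Wald-type statistic $(k-1)F$:

```python
import numpy as np

rng = np.random.default_rng(1)
k, n_per = 3, 200
# simulate k groups under the null hypothesis of equal means
groups = [rng.normal(0.0, 1.0, n_per) for _ in range(k)]
n_total = k * n_per

grand_mean = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

# one-way ANOVA F-statistic: F(k-1, n_total-k) under the null
F = (ss_between / (k - 1)) / (ss_within / (n_total - k))
# Wald-type statistic: approximately chi^2 with k-1 df for large n
wald_like = (k - 1) * F
```

For large samples, $(k-1)F$ can be compared directly against $\chi^2_{k-1}$ quantiles, mirroring the $t$-test case above.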
