I have the model
$$
y=x^a \times z^b + e
$$
where $y$ is the dependent variable, $x$ and $z$ are explanatory variables, $a$ and $b$ are the parameters and $e$ is an error term. I have parameter estimates of $a$ and $b$ and a covariance matrix of these estimates. How do I test if $a$ and $b$ are significantly different?
Best Answer
Assessing the hypothesis that $a$ and $b$ are different is equivalent to testing the null hypothesis $a - b = 0$ (against the alternative that $a-b\ne 0$).
The following analysis presumes it is reasonable for you to estimate $a-b$ as $$U = \hat a - \hat b.$$ It also accepts your model formulation (often a reasonable one). Because the errors are additive (and could even produce negative observed values of $y$), the model cannot be linearized by taking logarithms of both sides.
The variance of $U$ can be expressed in terms of the covariance matrix $(c_{ij})$ of $(\hat a, \hat b)$ as
$$\operatorname{Var}(U) = \operatorname{Var}(\hat a - \hat b) = \operatorname{Var}(\hat a) + \operatorname{Var}(\hat b) - 2 \operatorname{Cov}(\hat a, \hat b) = c_{11} + c_{22} - 2c_{12}.$$
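For concreteness, here is this computation in `R`, with made-up numbers standing in for your estimates and covariance matrix:

```r
# Hypothetical parameter estimates and their covariance matrix
# (all numbers are made up for illustration).
est <- c(a = 1.2, b = 0.8)
C <- matrix(c(0.04, 0.01,
              0.01, 0.09),
            nrow = 2, dimnames = list(c("a", "b"), c("a", "b")))

U <- est["a"] - est["b"]                             # Estimate of a - b
var.U <- C["a", "a"] + C["b", "b"] - 2 * C["a", "b"] # c11 + c22 - 2 c12
```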
When $(\hat a, \hat b)$ is estimated with least squares, one usually uses a "t test;" that is, the distribution of $$t = U / \sqrt{\operatorname{Var}(U)}$$ is approximated by a Student t distribution with $n-2$ degrees of freedom (where $n$ is the data count and $2$ counts the number of coefficients). Regardless, $t$ usually is the basis of any test. You may perform a Z test (when $n$ is large or when fitting with Maximum Likelihood) or bootstrap it, for instance.
To be specific, the p-value of the t test is given by
$$p = 2t_{n-2}(-|t|)$$
where $t_{n-2}$ is the Student t (cumulative) distribution function. It is one expression for the "tail area:" the chance that a Student t variable (of $n-2$ degrees of freedom) equals or exceeds the size of the test statistic, $|t|.$
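In `R`, continuing with the made-up numbers above and a hypothetical $n = 20$, the statistic and its p-value are:

```r
U <- 1.2 - 0.8                        # a.hat - b.hat (made-up numbers)
var.U <- 0.04 + 0.09 - 2 * 0.01       # c11 + c22 - 2 * c12
n <- 20                               # Hypothetical data count
t.stat <- U / sqrt(var.U)
p <- 2 * pt(-abs(t.stat), df = n - 2) # Two-sided p-value, 2 * t_{n-2}(-|t|)
```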
More generally, for numbers $c_1,$ $c_2,$ and $\mu$ you can use exactly the same approach to test any hypothesis
$$H_0: c_1 a + c_2 b = \mu$$
against the two-sided alternative. (This encompasses the special but widespread case of a "contrast".) Use the estimated variance-covariance matrix $(c_{ij})$ to estimate the variance of $U = c_1 \hat a + c_2 \hat b,$ which is $$\operatorname{Var}(U) = c_1^2 c_{11} + 2 c_1 c_2 c_{12} + c_2^2 c_{22},$$ and form the statistic
$$t = (c_1 \hat a + c_2 \hat b - \mu) / \sqrt{\operatorname{Var}(U)}.$$
The foregoing is the case $(c_1,c_2) = (1,-1)$ and $\mu=0.$
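A small `R` function wrapping this general test might look like the following (the coefficient names and example numbers are hypothetical):

```r
# Two-sided test of H0: c1 * a + c2 * b = mu, given a named vector of
# estimates `est` and their covariance matrix `C` (with matching names).
contrast.test <- function(est, C, c1, c2, mu = 0, n) {
  U <- c1 * est["a"] + c2 * est["b"]
  var.U <- c1^2 * C["a", "a"] + c2^2 * C["b", "b"] +
           2 * c1 * c2 * C["a", "b"]
  t.stat <- (unname(U) - mu) / sqrt(var.U)
  c(estimate = unname(U), t = t.stat,
    p.value = 2 * pt(-abs(t.stat), df = n - 2))
}

# The test of a - b = 0 is the case (c1, c2) = (1, -1) and mu = 0:
est <- c(a = 1.2, b = 0.8)           # Made-up numbers
C <- matrix(c(0.04, 0.01, 0.01, 0.09), nrow = 2,
            dimnames = list(c("a", "b"), c("a", "b")))
contrast.test(est, C, 1, -1, mu = 0, n = 20)
```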
To check that this advice is correct, I ran the following `R` code to create data according to this model (with Normally distributed errors), fit them, and compute the values of $t$ many times. The check is that the probability plot of $t$ (based on the assumed Student t distribution) closely follows the diagonal. Here is that plot from a simulation of size $500$ where $n=5$ (a very small dataset, chosen because the $t$ distribution is far from Normal) and $a=b=-1/2.$

In this example, at least, the procedure works beautifully. Consider re-running the simulation using parameters $a,$ $b,$ $\sigma$ (the error standard deviation), and $n$ that reflect your situation.
Here is a sketch of the code.
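Only $a = b = -1/2,$ $n = 5,$ and the $500$ iterations are fixed by the description above, so the fitting routine (`nls`), the regressor distribution, and $\sigma = 0.1$ are illustrative choices.

```r
# Simulate the model y = x^a * z^b + e many times, fit it by nonlinear
# least squares, and collect the t statistic for testing a - b = 0.
set.seed(17)
a.true <- -1/2; b.true <- -1/2  # True parameter values
sigma <- 0.1                    # Error SD (assumed)
n <- 5                          # Observations per simulated dataset
n.sim <- 500                    # Number of simulated datasets

t.stats <- replicate(n.sim, {
  x <- rexp(n) + 1              # Positive regressors (assumed design)
  z <- rexp(n) + 1
  y <- x^a.true * z^b.true + rnorm(n, 0, sigma)
  fit <- try(nls(y ~ x^a * z^b, start = list(a = a.true, b = b.true)),
             silent = TRUE)
  if (inherits(fit, "try-error")) return(NA)
  beta <- coef(fit)
  V <- vcov(fit)                # Covariance matrix of the estimates
  (beta["a"] - beta["b"]) / sqrt(V["a", "a"] + V["b", "b"] - 2 * V["a", "b"])
})

# Probability plot: sorted t statistics against Student t quantiles with
# n - 2 degrees of freedom; points hugging the diagonal indicate the
# t approximation works.
t.stats <- sort(t.stats[!is.na(t.stats)])
plot(qt(ppoints(length(t.stats)), df = n - 2), t.stats,
     xlab = "Theoretical t quantiles", ylab = "Simulated t statistics")
abline(0, 1, col = "red")
```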