Regression Analysis – Test for Significance of Regression Coefficient Compared to Null Model

hypothesis-testing, r, regression

I'm running a logistic regression in R and trying to assess whether the estimated coefficient differs from the expected coefficient under a custom null model (not the built-in/standard null hypothesis that the coefficient is 0). However, I'm having a bit of trouble because both the estimated coefficient and the coefficients from the custom null model have standard errors, which I'd like to account for. Here's what I have done so far:

  1. Performed a logistic regression with the glm function on a dataset to obtain a single regression coefficient and its associated standard error
  2. Developed a custom null model which creates a number of datasets, each with a simulated binary response variable. For each dataset, I perform an identical logistic regression with glm to obtain a regression coefficient and its associated standard error. This results in a set of "null" coefficients and their associated standard errors.
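A minimal sketch of steps (1) and (2) might look like the following. The data frame `dat`, the variables `y` and `x`, and the null data-generating process (a fair coin flip for the response) are all illustrative assumptions, not the poster's actual setup:

```r
# Step (1): observed fit -- one coefficient and its standard error.
# `dat`, `y`, and `x` are hypothetical stand-ins for the real data.
set.seed(1)
dat <- data.frame(x = rnorm(200))
dat$y <- rbinom(200, 1, plogis(0.5 * dat$x))

fit <- glm(y ~ x, data = dat, family = binomial)
beta_obs <- coef(summary(fit))["x", "Estimate"]
se_obs   <- coef(summary(fit))["x", "Std. Error"]

# Step (2): custom null model -- refit the same glm on many simulated
# binary responses (here, an example null of p = 0.5 for every row).
n_sim <- 500
null_fits <- t(replicate(n_sim, {
  y_null <- rbinom(nrow(dat), 1, 0.5)
  f <- glm(y_null ~ x, data = dat, family = binomial)
  coef(summary(f))["x", c("Estimate", "Std. Error")]
}))
beta_null <- null_fits[, "Estimate"]    # "null" coefficients
se_null   <- null_fits[, "Std. Error"]  # their standard errors
```

The end result is one observed (coefficient, SE) pair plus a set of null (coefficient, SE) pairs, which is the setup the question asks about.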

Is there any way to then test (ideally giving a p-value) whether the coefficient from (1) is different from the distribution of coefficients from (2), while also accounting for the standard errors of all of these coefficients (1 and 2)? I've searched Google/CV with every keyword I can think of, to no avail.

Best Answer

I decided to calculate a weighted mean coefficient and its standard error (using the diagis package in R) for the null model; this seems reasonable given that the null coefficients are approximately normally distributed. I then compared the two coefficients and their standard errors using the following Z test (more details in this other CV post):

$$Z = \frac{\beta_1 - \beta_2}{\sqrt{SE_{\beta_1}^2 + SE_{\beta_2}^2}}$$

I then calculated the p-value from this Z statistic using the pnorm function in R.
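A self-contained sketch of this answer's calculation, using base R in place of diagis (an inverse-variance weighted mean) and a two-sided p-value from pnorm. The numeric inputs are toy values standing in for the observed fit and the null simulations:

```r
# Toy inputs (illustrative, not real results):
beta_obs  <- 0.62; se_obs <- 0.18          # observed coefficient and its SE
beta_null <- c(0.02, -0.05, 0.04, 0.01)    # null-model coefficients
se_null   <- c(0.20, 0.21, 0.19, 0.22)     # their standard errors

# Inverse-variance weighted mean of the null coefficients and its SE
# (diagis::weighted_mean/weighted_se would play this role in the answer).
w      <- 1 / se_null^2
beta_2 <- sum(w * beta_null) / sum(w)
se_2   <- sqrt(1 / sum(w))

# Z test from the formula above, then a two-sided p-value.
z <- (beta_obs - beta_2) / sqrt(se_obs^2 + se_2^2)
p <- 2 * pnorm(-abs(z))
```

One design note: the inverse-variance weighting down-weights null replicates whose coefficients are poorly estimated, which is what lets the test account for the standard errors from step (2) as well as step (1).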