Standard deviation of least-squares standard error


I do a measurement where I collect a set of data and fit it to a linear model using ordinary least squares.
From that fit I get a slope $b$ and its standard error $s$.
Now I repeat the measurement $N$ times and get $N$ slopes and $N$ standard errors. I want to derive the standard deviation of the slope. How do I incorporate the standard errors here?
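To make the setup concrete, here is a minimal sketch in Python (using numpy and statsmodels; the true values, noise level, and sample sizes are made up purely for illustration) of collecting $N$ repeated fits, each yielding a slope estimate and its standard error:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
N, n_points = 20, 30                                   # hypothetical: repeats, points per repeat
true_intercept, true_slope, noise_sd = 1.0, 2.5, 0.3   # hypothetical true values

slopes, std_errs = [], []
for j in range(N):
    x = np.linspace(0.0, 1.0, n_points)
    y = true_intercept + true_slope * x + rng.normal(0.0, noise_sd, n_points)
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    slopes.append(fit.params[1])    # slope estimate b_j
    std_errs.append(fit.bse[1])     # its standard error s_j

# Naive spread of the slope estimates, which ignores the per-fit standard errors:
print("sd of slopes (ignoring s_j):", np.std(slopes, ddof=1))
```

The question is how to go beyond that last line and fold the individual standard errors $s_j$ into the estimate of the slope's variability.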

Best Answer

As I have less than 50 reputation, I'll post this as an answer. A standard way to deal with this kind of problem is to fit a multilevel model. If $y_{ij}$ is your measurement of the outcome for unit $i$ at the $j$th wave of data collection, the model assumes that

$$ y_{ij} = \alpha_j + \beta_j x_{ij} + \epsilon_{ij}$$

(note the subscripts on the coefficients) where

$$(\alpha_j, \beta_j) \sim \mathcal N(\mu, \Sigma)$$

$$\epsilon_{ij} \sim \mathcal N(0, \Omega),$$

where $\epsilon_{ij}$ and $(\alpha_j,\beta_j)$ are independently distributed. Although the distributions do not have to be Normal, the Normal distribution is the natural choice in many applications. It is also often assumed that $\Omega = \sigma^2 I$, although this assumption can easily be relaxed. If you have more than one predictor in your regression, $\beta_j$ would be a vector. Note that $\Sigma$ (the covariance matrix of the regression coefficients) contains the variances of the intercept and slope and their covariance. You can test whether these are significantly different from zero with likelihood-ratio tests; however, you have to be careful, as the null hypothesis would lie on the boundary of the parameter space.
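A minimal sketch of fitting such a model, assuming Python with statsmodels and a long-format DataFrame `data` with columns `y`, `x`, and a wave identifier `wave` (the column names are placeholders, not taken from the question):

```python
import statsmodels.formula.api as smf
from scipy import stats

# Random intercept and random slope, grouped by wave of data collection.
# Fit by ML (reml=False) so the likelihood-ratio comparison below is valid.
full = smf.mixedlm("y ~ x", data, groups=data["wave"], re_formula="~x").fit(reml=False)
print(full.fe_params)   # estimated mu: mean intercept and slope
print(full.cov_re)      # estimated Sigma: intercept/slope variances and their covariance

# Likelihood-ratio test for the random slope (its variance and covariance terms).
# The slope variance sits on the boundary of the parameter space under the null,
# so the naive chi-squared(2) reference is conservative; a 50:50 mixture of
# chi-squared(1) and chi-squared(2) is a commonly used correction.
reduced = smf.mixedlm("y ~ x", data, groups=data["wave"], re_formula="~1").fit(reml=False)
lr = 2 * (full.llf - reduced.llf)
p_naive = stats.chi2.sf(lr, df=2)
p_mixture = 0.5 * stats.chi2.sf(lr, df=1) + 0.5 * stats.chi2.sf(lr, df=2)
print(lr, p_naive, p_mixture)
```

The square root of the slope entry of `cov_re` is the between-measurement standard deviation of the slope, which is the quantity the question asks for, with the per-fit uncertainty absorbed into the residual term of the model.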
