Solved – What exactly is the precision of an estimate

Tags: estimators, least squares, precision

What exactly determines the precision of an estimate (not estimator) from a finite, "real world" sample? I know there has been a similar question asked, but I think my question is different enough for a separate thread.

For my Econometrics midterm, we were given a bunch of Stata output from a handful of regressions and asked various questions about it. To simplify things, let's only focus on two tables (because that is all that is relevant to my question): Table 1 for regression 1 $y_i=b_{01}+b_{11} x_{i1} + e_{i1}$ and Table 2 for regression 2 $y_i=b_{02}+b_{12} x_{i1} + b_{22} x_{i2}+e_{i2}$, where $x_{ij}$ indicates the $j^{\mathrm{th}}$ variable for observation $i$ and $b_{jk}$ indicates the $j^{\mathrm{th}}$ coefficient of regression $k$. So yes, $x_{i1}$ is the same in both regressions. All regressions were estimated with OLS, btw.

We were asked "Is $b_{12}$ 'precisely measured' in regression 2 (explain what you mean by precisely measured)?" The same question went on to ask things about hypothesis testing and statistical significance. Let's say the standard error for $b_{11}$ is $0.12$ with a p-value of $0.15$, and the standard error for $b_{12}$ is $0.5$ with a p-value $<0.001$. No other tables mention these regressions (or any nested version of them).

The answer on the answer key was "Yes, because it is statistically significant" (but using more words). However, I answered, "No, $b_{12}$ is not measured precisely *relative* to $b_{11}$, because $\mathrm{SE}(b_{11}) < \mathrm{SE}(b_{12})$. While the estimate of the total effect ($b_{11}$) is not statistically significant, the estimate of the partial effect ($b_{12}$) increases the magnitude of the relationship more than it increases the imprecision of the estimate, leading to a statistically significant estimate of $b_{12}$."

Everywhere our lecture slides talk about precision (which is only two places), it is in reference to standard errors (or variances). Yes, statistical significance involves standard errors, but I was under the impression that the coefficient gives the magnitude of the relationship, the standard error (or variance) of the estimate gives its precision, and statistical significance refers to the ratio of the two (more or less, after adjusting for degrees of freedom).
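To make that ratio concrete, here is a small sketch (the coefficients below are hypothetical, not taken from the exam tables, and a normal approximation stands in for Stata's t distribution):

```python
import math

def z_test(coef, se):
    """Two-sided p-value for H0: coefficient = 0, using a normal
    approximation (Stata reports t statistics; with large n they agree)."""
    z = coef / se
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Hypothetical numbers: same standard error (same "precision"),
# different magnitudes -- the SE alone does not decide significance.
z1, p1 = z_test(0.72, 0.5)   # modest effect:       z = 1.44, p ~ 0.15
z2, p2 = z_test(1.80, 0.5)   # larger effect, same SE: z = 3.6, p < 0.001
print(z1, p1)
print(z2, p2)
```

So two estimates can be equally precise while only one is significant, which is why I distinguished the two concepts in my answer.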

Best Answer

In statistics, we formally define precision to be the inverse of variance.
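In symbols, for an estimator $\hat\beta$ with standard error $\mathrm{SE}(\hat\beta)$:

$$\operatorname{prec}(\hat\beta) = \frac{1}{\operatorname{Var}(\hat\beta)} = \frac{1}{\mathrm{SE}(\hat\beta)^2}.$$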

The problem with comparing the standard errors of different regression coefficients in the same model is that the covariates may not be on comparable scales. A binary covariate has variance at most 0.25 (standard deviation at most 0.5), whereas a continuous covariate can have an arbitrarily large standard deviation. A coefficient and its standard error both scale inversely with the scale of its covariate, so if the covariates are not standardized, comparing their SEs is useless.
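A small simulation (my own NumPy sketch, not from the original question) makes the scaling point concrete: rescaling a covariate rescales both its coefficient and its SE by the same factor, leaving the t-ratio untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)               # covariate with sd ~ 1
y = 2.0 * x + rng.normal(size=n)

def ols_slope_se(x, y):
    """Slope and its standard error for y = a + b*x via OLS."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1], np.sqrt(cov[1, 1])

b1, se1 = ols_slope_se(x, y)         # covariate on its original scale
b2, se2 = ols_slope_se(100 * x, y)   # same covariate, rescaled by 100
# Both the slope and its SE shrink by the factor of 100, so the t-ratio
# is unchanged; comparing raw SEs across differently scaled covariates
# therefore says nothing about relative precision.
print(se1 / se2)                     # ~ 100
```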

It is also incorrect to say that something is "precisely measured" just because the inference on that regression coefficient is statistically significant. At best we can conclude that its value is non-zero, citing an assumed false-positive rate of 0.05. Further, if the null hypothesis actually is true, you may have a very narrow CI indicating a high degree of precision; you would not reject the null hypothesis because it is true, yet the answer key's criterion would call such an estimate imprecise.
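To see that second point, here is a minimal simulation (again my own sketch, assuming a true slope of exactly zero): the estimate is very precise, yet correctly fails to reach significance.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x = rng.normal(size=n)
y = rng.normal(size=n)               # true slope is exactly 0

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
se = np.sqrt(resid @ resid / (n - 2) * np.linalg.inv(X.T @ X)[1, 1])

lo, hi = beta[1] - 1.96 * se, beta[1] + 1.96 * se
# A tight interval around a near-zero slope: high precision, no effect.
print(f"slope = {beta[1]:.4f}, 95% CI = [{lo:.4f}, {hi:.4f}]")
```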

The way to answer this question is to appeal to what is known about the effect from previous studies. If these data come from a confirmatory study in an area with mixed findings, or with prior reports of similar effects and their 95% confidence intervals, you would use that knowledge to rank your study in terms of precision.
