Solved – General Relationship Between Standard Error and Sample Size


Suppose I empirically estimate the standard error of some statistic to be 10% (perhaps I do this by bootstrapping).

Now, I want to know how much I need to increase the sample size to reduce that error to 5%. The estimate and the sample are arrived at through complex procedures, so theoretically deriving the relationship between sample size and standard error is not feasible. However, I believe the standard error decreases as the sample size increases.

Is it plausible to assume that the standard error is proportional to the inverse of the square root of $n$ (based on the standard error of a sample mean using simple random sampling)?

$\text{se} = s / \sqrt{n}$

Do standard errors behave (very) roughly this way in general with respect to sample size, regardless of the estimate and the sampling procedure? How bad an assumption is this?
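
For concreteness, here is the extrapolation this assumption would imply -- a minimal sketch, where the current sample size is just a made-up number:

```python
# Under the 1/sqrt(n) assumption, se ~ c / sqrt(n), so
#   n_target = n_current * (se_current / se_target) ** 2
n_current = 1_000                    # hypothetical current sample size
se_current, se_target = 0.10, 0.05   # estimated and desired standard errors
n_target = n_current * (se_current / se_target) ** 2
print(n_target)                      # 4000.0 -- quadruple the sample to halve the se
```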

Best Answer

regardless of the estimate and the sampling procedure?

No, you will not get a "root-$n$" effect regardless of those things; at least some standard errors do not shrink like $1/\sqrt{n}$.

Many do -- quite possibly all the ones you're likely to use -- but not all of them.

For statistics whose standard error does shrink like $1/\sqrt{n}$, you expect to halve the standard error by quadrupling the sample size. So (at least if we ignore sampling variation in the estimate of $\sigma$), quadrupling is probably what you need.
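
If you want to check whether your particular statistic behaves this way, one option is to compare bootstrap standard errors at two sample sizes. A minimal sketch, using simulated data and the sample median as a stand-in for your statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_se(x, stat=np.median, n_boot=2000):
    # Bootstrap standard error: resample with replacement, take the SD of the statistic
    n = len(x)
    reps = np.array([stat(rng.choice(x, size=n, replace=True)) for _ in range(n_boot)])
    return reps.std(ddof=1)

x_small = rng.exponential(size=500)
x_large = rng.exponential(size=2000)   # 4x the sample size
print(bootstrap_se(x_small))           # roughly twice...
print(bootstrap_se(x_large))           # ...this value, if root-n scaling holds
```

(In practice you would subsample your own data at a couple of sizes rather than simulate, but the comparison is the same.)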

One example of something that isn't proportional to $\frac{1}{\sqrt{n}}$ is the standard error of a kernel density estimate when the bandwidth is itself chosen as a function of $n$. [For some common choices of bandwidth formula the standard error goes down as $n^{-2/5}$ instead.]
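
A quick way to see that rate numerically: simulate the sampling distribution of a Gaussian-kernel density estimate at a single point, with the bandwidth picked by a Silverman-type rule $h \propto n^{-1/5}$ (an assumed bandwidth choice; other rules give other rates). Quadrupling $n$ should shrink the standard error by roughly $4^{-2/5} \approx 0.57$, not $0.5$:

```python
import numpy as np

rng = np.random.default_rng(0)

def kde_at_point(x, x0, h):
    # Gaussian-kernel density estimate evaluated at the single point x0
    return np.mean(np.exp(-0.5 * ((x0 - x) / h) ** 2)) / (h * np.sqrt(2 * np.pi))

def monte_carlo_se(n, n_rep=2000, x0=0.0):
    # SD of the KDE at x0 over repeated standard-normal samples of size n,
    # with Silverman's rule-of-thumb bandwidth h = 1.06 * s * n**(-1/5)
    est = np.empty(n_rep)
    for r in range(n_rep):
        x = rng.standard_normal(n)
        h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)
        est[r] = kde_at_point(x, x0, h)
    return est.std(ddof=1)

print(monte_carlo_se(400))    # about 4**(2/5) ~ 1.7 times...
print(monte_carlo_se(1600))   # ...this value, not twice it
```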
