Solved – How to calculate standard error of sample quantile from normal distribution with known mean and standard deviation

Tags: quantiles, standard error

I know that the standard error of the mean for an iid sample is calculated as $$\frac{\sigma}{\sqrt{n}}$$

However, assuming a normal distribution with known mean and standard deviation, how do you calculate the standard error of an arbitrary sample quantile?

For example, assume

  • normal distribution
  • population mean = 0
  • population standard deviation = 1
  • n=100
  • quantile = .95

What would be the standard error of this quantile?


I ran this little simulation to explore the properties, but I'm still interested in the closed form solution:

set.seed(1234)
generate_x <- function(n) rnorm(n)  # draw one sample of size n
k <- 10000  # number of simulated samples
n <- 100    # sample size

results <- lapply(seq_len(k), function(i) generate_x(n))

Z <- seq(.01, .99, .01)  # probability levels
qresults <- sapply(results, function(x) quantile(x, Z))

sd_quresults <- apply(qresults, 1, sd)    # simulated SE of each sample quantile
var_quresults <- apply(qresults, 1, var)  # and the corresponding variance

plot(Z, sd_quresults, type = 'l',
     xlab = "probability level", ylab = "SE of sample quantile")

[Plot: standard error of the sample quantile as a function of the probability level Z]

Best Answer

This gives at least some pointers toward, and a partial answer to, the question.

In the case of sample quantiles, the standard error depends on which definition of sample quantiles you actually use. R's quantile function, for example, implements nine different definitions (selected via its type argument).

For cases where the sample quantile is an exact order statistic, the standard error of the sample quantile follows from the standard error of that order statistic.
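In large samples there is also a well-known closed-form approximation that covers the example in the question: the $p$-th sample quantile is asymptotically normal with standard error $$\sqrt{\frac{p(1-p)}{n}} \cdot \frac{1}{f(Q(p))},$$ where $f$ is the population density and $Q$ the population quantile function. A minimal R sketch for the standard normal case with $p = .95$ and $n = 100$ (se_quantile_normal is an illustrative helper name, not a standard function):

```r
# Asymptotic SE of the p-th sample quantile:
#   SE ~ sqrt(p * (1 - p) / n) / f(Q(p))
se_quantile_normal <- function(p, n, mean = 0, sd = 1) {
  xp <- qnorm(p, mean, sd)                   # population quantile Q(p)
  sqrt(p * (1 - p) / n) / dnorm(xp, mean, sd)
}

se_quantile_normal(0.95, 100)  # ~0.211
```

This agrees well with the simulated curve in the question, which is close to 0.21 at Z = .95.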

If a quantile is based on some weighted average of two order statistics, then the standard error can be obtained from their variances, their covariance and the weights.
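Concretely, if the estimator is $\hat{x}_p = w\,X_{(i)} + (1-w)\,X_{(i+1)}$ for some weight $w \in [0,1]$, then $$\operatorname{Var}(\hat{x}_p) = w^2\operatorname{Var}(X_{(i)}) + (1-w)^2\operatorname{Var}(X_{(i+1)}) + 2w(1-w)\operatorname{Cov}(X_{(i)}, X_{(i+1)}),$$ and the standard error is the square root of this variance.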

As a result, confidence intervals can be formed; in the case of a quantile being an order statistic, a binomial distribution can be used to form a nonparametric interval directly from order statistics.
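The binomial construction can be sketched as follows: the number of observations at or below the true $p$-th quantile is Binomial($n$, $p$), so order-statistic ranks covering the central mass of that binomial bracket the quantile with roughly the nominal coverage (quantile_ci is an illustrative helper, not part of base R, and exact rank conventions vary slightly across references):

```r
# Distribution-free CI for the population p-th quantile from order statistics.
quantile_ci <- function(x, p, conf = 0.95) {
  n <- length(x)
  alpha <- (1 - conf) / 2
  lo <- max(qbinom(alpha, n, p), 1)          # lower order-statistic rank
  hi <- min(qbinom(1 - alpha, n, p) + 1, n)  # upper order-statistic rank
  sort(x)[c(lo, hi)]
}

set.seed(1234)
x <- rnorm(100)
quantile_ci(x, 0.95)  # two order statistics bracketing the .95 quantile
```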