Estimating CVs. The coefficient of variation (CV) is $\kappa = \sigma/\mu.$ It can be
estimated by $\hat \kappa = K = S/\bar X,$ where $\bar X$ and $S$
are the sample mean and SD, respectively. For small $n,$ this estimate
is biased on the low side, but for moderate and large samples
the bias is small. Methods of finding confidence intervals (CIs)
for the CV depend on the nature of the underlying distribution.
Because the type of population distribution may be unknown, it may
be useful to use a nonparametric bootstrap CI for $\kappa.$
Because populations in practice are often skewed (especially
right-skewed), the bootstrap procedure must accommodate skewness.
Because I found the literature on CIs for the CV to be partly
hidden behind paywalls and partly poorly explained, I wonder
whether bootstrap CIs may be the best solution for your application.
I give two examples of bootstrap CIs below, one using a sample from
a normal population and one using a sample from a gamma population.
At the least, you can compare these results with results from
formulas you may find in your Internet searches.
Bootstrap CIs. If we knew the distribution of $V = K - \kappa,$ we could find
bounds $L$ and $U$ cutting 2.5% from its lower and upper tails,
respectively, to get $P(L < K - \kappa < U) = 0.95,$ from which
we would obtain the 95% CI $(K - U, K - L)$ for $\kappa.$
Not knowing the distribution of $V,$ we re-sample from our data
$X = (X_1, X_2, \dots, X_n).$ Iteratively we find re-samples
of size $n$ with replacement from $X,$ find $K^* = S^*/\bar X^*$
and then $V^* = K^* - K_{obs}$ for each re-sample, where the
observed CV $K_{obs}$ from the original sample $X$ stands in for
the unknown $\kappa.$ Finally, we get $L^*$ and $U^*$ by cutting 2.5%
from each tail of the $V^*$'s, the 'bootstrapped' values of $V$,
and use these estimated bounds to get a 95% bootstrap CI.
Examples of Bootstrap CIs. As a demonstration, I use a sample $X$ of size $n = 100$ from
$\mathsf{Norm}(\mu = 200, \sigma=25)$ with $\kappa = 0.125.$
In the outline of the bootstrap procedure above, $*$'s denote
quantities based on re-sampling. In the R program below we use the
suffix .re for the same purpose.
Note: It is important to understand that re-sampling does not
create additional information; it only exploits the information
already present in the data.
Normal. For the particular normal sample used here, $K_{obs} = 0.118,$ and
the 95% nonparametric bootstrap CI obtained is $(0.102, 0.135).$
Because bootstrap procedures involve random re-sampling, each run
of the program may give a slightly different CI, but not much
different with as many as $B = 10^5 = 100,000$ iterations.
x = rnorm(100, 200, 25)                  # sample of n = 100 from Norm(200, 25)
k.obs = sd(x)/mean(x);  k.obs            # observed CV of the original sample
## 0.1180088
B = 10^5;  v.re = numeric(B)             # B re-samples; storage for the V*'s
for(i in 1:B) {
  x.re = sample(x, 100, replace=TRUE)    # re-sample of size n with replacement
  k.re = sd(x.re)/mean(x.re)             # K* for this re-sample
  v.re[i] = k.re - k.obs }               # V* = K* - K.obs
UL = quantile(v.re, c(.975, .025))       # U* and L*: upper and lower 2.5% cuts
k.obs - UL                               # 95% bootstrap CI (K - U*, K - L*)
##     97.5%      2.5%
## 0.1018754 0.1350186
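As a cross-check on the hand-rolled loop, the boot package gives
essentially the same interval; its 'basic' CI type is the same
$K - \kappa$ pivot described above. A minimal sketch, assuming the
vector x from the program above is still in the workspace:
library(boot)                            # standard R bootstrap package
cv = function(d, i) sd(d[i])/mean(d[i])  # CV computed on the indexed re-sample
bt = boot(x, statistic = cv, R = 10^5)   # 10^5 re-samples, as above
boot.ci(bt, type = "basic")              # 'basic' = pivot interval (K - U*, K - L*)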
Gamma. This bootstrap procedure is called 'nonparametric' because it does
not assume any particular type of distribution for the data. A
second sample of size $n = 100$ was taken from the distribution
$\mathsf{Gamma}(\mathrm{shape} = \alpha = 4,\ \mathrm{rate} = \lambda = 0.1),$
for which the mean is $\alpha/\lambda$ and the SD is $\sqrt{\alpha}/\lambda,$
so $\kappa = \sqrt{\alpha}/\alpha = 1/2.$ This sample has $K = 0.507$
and the 95% nonparametric bootstrap CI is $(0.442, 0.579).$
A second run of the bootstrap program with the same data gave
the CI $(0.442, 0.580).$
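Because the particular gamma sample is not reproduced here, the
sketch below draws a fresh sample, so its interval will differ
somewhat from the one quoted above; the bootstrap loop itself is
unchanged from the normal example.
x = rgamma(100, shape=4, rate=0.1)       # fresh sample; kappa = 1/sqrt(4) = 0.5
k.obs = sd(x)/mean(x);  k.obs            # observed CV (0.507 for the sample above)
B = 10^5;  v.re = numeric(B)
for(i in 1:B) {
  x.re = sample(x, 100, replace=TRUE)    # re-sample with replacement
  v.re[i] = sd(x.re)/mean(x.re) - k.obs }  # V* for this re-sample
UL = quantile(v.re, c(.975, .025))
k.obs - UL                               # compare with (0.442, 0.579) quoted above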
CI for a Variance. For a sample from a normal population, a two-sided $100(1-\alpha)\%$ interval for the variance is
$$\left[\frac{(n-1)S^2}{\chi_{1 - \alpha/2, n-1}^2}, \frac{(n-1)S^2}{\chi_{\alpha/2, n-1}^2}\right]$$ where $S^2$ is the sample variance, $n$ is the sample size, and $\chi_{\alpha/2, n-1}^2$ is the $\alpha/2$ quantile of the chi-square distribution with $n-1$ degrees of freedom; i.e., it is the value at which the CDF of a $\chi_{n-1}^2$ random variable equals $\alpha/2$; similarly, $\chi_{1-\alpha/2, n-1}^2$ is the $1 - \alpha/2$ quantile.
For example, with $n = 9$ and $\alpha = 0.05$, we have $$\chi_{0.025,\,8}^2 \approx 2.17973, \qquad \chi_{0.975,\,8}^2 \approx 17.5345.$$ Notice that because these are in the denominators of the confidence limits, the larger quantile is used for the lower confidence limit, and the smaller quantile is used for the upper confidence limit.
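In R these quantiles come from qchisq. A quick check of the numbers
above, together with the resulting interval for a hypothetical
sample variance $S^2 = 10$ (a made-up value, for illustration only):
n = 9;  alpha = 0.05
qchisq(alpha/2, n-1)                     # 2.17973
qchisq(1 - alpha/2, n-1)                 # 17.5345
s2 = 10                                  # hypothetical sample variance
(n-1)*s2 / qchisq(c(1 - alpha/2, alpha/2), n-1)  # 95% CI: about (4.562, 36.702)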
CI for a Mean. You should be using $$\bar{x}\mp t_{0.975}(n-1)\frac{\hat{\sigma}}{\sqrt{n}}.$$
In this case, $\bar{x}=9.6$, $t_{0.975}(24)=2.064$, and $\frac{\hat{\sigma}}{\sqrt{n}}=\frac{\sqrt{22.4}}{\sqrt{25}}\times\frac{\sqrt{25}}{\sqrt{\color{red}{25-1}}},$ since the given $22.4$ is the variance with divisor $n$ and we require the unbiased estimate (divisor $n-1$) of the population variance.
This gives the stated answer $7.606$ for the lower bound, but for the upper bound we should get $11.594.$
One possible explanation for their value of $11.590$ is the unintended use of $t_{0.975}(25)=2.060$ just for the upper bound, which would give precisely that, so it could be an error on their part.
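A quick numerical check of both computations in R:
xbar = 9.6;  n = 25
se = sqrt(22.4/(n-1))                    # = sqrt(22.4)/sqrt(25) * sqrt(25)/sqrt(24)
xbar + c(-1, 1) * qt(0.975, n-1) * se    # about 7.6061 and 11.5939
xbar + qt(0.975, n) * se                 # about 11.5897, i.e. their 11.590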