Here are three complementary recommendations:
Introductory Mathematical Statistics: Principles and Methods by Erwin Kreyszig
This classic is an elementary introduction. The first part presents descriptive statistics, followed by probability theory; the main part is devoted to statistical inference.
Kreyszig has a talent for making things easily understandable: he comes straight to the point and focuses on the essentials. The book covers what you need and is full of nice examples that make the theory behind it clear and comprehensible.
Note: Years ago I became acquainted with one of his books when I was reading Introductory Functional Analysis. It is a great, very simple introduction to the subject that does not require measure theory. I have been a fan of his ever since.
The next two recommendations are example-centered.
Schaum's Statistics by Murray R. Spiegel and Larry J. Stephens
Schaum's Outlines need no introduction; they typically contain good, and sometimes great, examples for practice.
Instead of elaborating, I'd like to quote from chapter XXII of Indiscrete Thoughts by Gian-Carlo Rota:
Anyone who is about to teach the undergraduate mathematics curriculum should come down to earth by looking through The Schaum's Outlines before burdening the class with those well printed, many-colored, highly advertised hardcover volumes that are pathetically passed off as textbooks.
and the third one:
Introductory Statistics with R by Peter Dalgaard
To improve understanding and get a better feeling for the material, it is extraordinarily helpful to play around with many, many examples. So, if a computer is available and you are willing and able to write some code snippets, I strongly recommend using it heavily for many examples.
Since I'm somewhat experienced in R programming, I recommend the book above. But any other language with a statistics package and some supporting literature can also do the job.
First, let's get the notation and definitions right.
The sample mean $\bar X = \frac 1n\sum_{i=1}^n X_i.$
If the population mean $\mu$ is unknown and estimated by $\bar X,$
then the population variance $\sigma^2$ is estimated by the sample variance
$S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2.$
Then
$$\frac{(n-1)S^2}{\sigma^2} = \frac{\sum_{i=1}^n(X_i - \bar X)^2}{\sigma^2}
\sim \mathsf{Chisq}(df = n-1).$$
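As a sanity check, this distributional fact can be verified by simulation. The sketch below draws many normal samples and compares the moments of $(n-1)S^2/\sigma^2$ with those of $\mathsf{Chisq}(9)$; the normal parameters and seed are arbitrary choices for illustration:

```r
set.seed(1)                        # for reproducibility
n <- 10; mu <- 25; sigma <- 2      # arbitrary illustrative values

# Simulate (n-1)S^2/sigma^2 for 10,000 normal samples of size n
stat <- replicate(10000, {
  x <- rnorm(n, mu, sigma)
  (n - 1) * var(x) / sigma^2
})

mean(stat)   # should be near the Chisq(9) mean, 9
var(stat)    # should be near the Chisq(9) variance, 2*(n-1) = 18
```

A histogram of `stat` overlaid with `dchisq(x, 9)` shows the same agreement graphically.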
For your dataset the statistics are:
x = c(22.2, 24.7, 20.9, 26.0, 27.0, 24.8, 26.5, 23.8, 25.6, 23.9)
n = length(x); a = mean(x); s = sd(x)
n; a; s
## 10 # sample size
## 24.54 # sample mean
## 1.912648 # sample SD
Then a 95% confidence interval for the population variance $\sigma^2$
is obtained as
$$((n-1)S^2/U,\, (n-1)S^2/L),$$ where $L$ and $U$ cut 2.5% of the
probability from the lower and upper tails, respectively, of
$\mathsf{Chisq}(n-1).$ Computations of CIs for $\sigma^2$ and $\sigma$ in R statistical software
follow:
UL = qchisq(c(.975, .025), n - 1); UL
## 19.022768 2.700389
CI = (n-1)*s^2 / UL; CI
## 1.730768 12.192315 95% CI for pop var
sqrt(CI)
## 1.315587 3.491750 95% CI for pop SD
Notice that $S = 1.913$ is contained in the CI for $\sigma$ as it
must be, but that $S$ is not at the center of the CI, because the
chi-squared distribution is skewed.
I assume you can use the appropriate quantiles of $\mathsf{Chisq}(9)$
to get 99% confidence intervals.
Addendum per comments, for 99% CIs: Of course, 99% confidence intervals
must be wider than 95% CIs.
UL = qchisq(c(.995, .005), n - 1); UL
## 23.589351 1.734933 # same as you showed in your question
CI = (n-1)*s^2 / UL; CI
## 1.395715 18.977103 # using correct numerator, this is different
sqrt(CI)
## 1.181404 4.356272
From comments:
This calls for a binomial proportion confidence interval, for which there are several approaches.
You do have a sample mean for the faulty proportion, $\hat p = \frac{59}{320}$, and a positive sample variance $\hat p(1 - \hat p)$.
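As a sketch, here are two standard approaches in R, using the 59 faulty items out of 320: the Wald interval based on the normal approximation, and the Wilson score interval that `prop.test` returns.

```r
x <- 59; n <- 320
p.hat <- x / n                          # sample proportion, about 0.184

# Wald (normal approximation) 95% CI: p.hat +/- z * sqrt(p.hat(1-p.hat)/n)
se <- sqrt(p.hat * (1 - p.hat) / n)
wald <- p.hat + c(-1, 1) * qnorm(.975) * se
wald

# Wilson score 95% CI (continuity correction switched off)
prop.test(x, n, correct = FALSE)$conf.int
```

The two intervals are close here because $n\hat p$ and $n(1-\hat p)$ are both large; for small samples or proportions near 0 or 1, the Wilson interval is generally preferred.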