Absent further clarification from the OP, here's what I think is happening:
Each sample consists of $N$ values drawn from a binomial distribution spanning the integer range $[5, 30]$. As a population, draws from this binomial distribution have mean $17.5$ and standard deviation of about $3.82$ (I believe; that figure is worth double-checking).
With a sample size of just $N$, however, the average of the sample is not generally $17.5$. It will be some value that is, overall, closer to the sample values than the population mean is. Hence the standard deviation of that $N$-count sample, treated as a population (i.e., dividing by $N$), will systematically underestimate the standard deviation of the population.
For example, with $N = 3$, if you draw $12, 14, 16$, you have an average of $14$. The standard deviation of the sample, treated as the population, is about $1.63$. But the RMS distance of those values from the actual population mean of $17.5$ is about $3.86$. The disparity arises from the sample average being closer to the data, overall, than the population mean.
The larger $N$ is, the smaller the expected disparity, which is perhaps why you obtain a smaller standard deviation for $N = 3$ than you do for $N = 10$.
ETA: Dividing by $N-1$ instead of $N$ produces an unbiased estimator of the population *variance*. (Its square root still slightly underestimates the population standard deviation, but far less severely than the divide-by-$N$ version.)
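A quick simulation illustrates the bias. As a stand-in for the distribution above I use a binomial$(25, 0.5)$ shifted by $5$, which has support on $[5, 30]$ and mean $17.5$; its standard deviation ($2.5$) is my assumption, not necessarily the OP's setup.

```python
import random
from statistics import pstdev, stdev  # pstdev divides by N, stdev by N-1


def avg_sd_estimate(n, estimator, trials=5000):
    """Average value of `estimator` over many samples of size n, drawn
    from a binomial(25, 0.5) shifted by 5 (an assumed stand-in)."""
    rng = random.Random(0)
    total = 0.0
    for _ in range(trials):
        xs = [5 + sum(rng.random() < 0.5 for _ in range(25)) for _ in range(n)]
        total += estimator(xs)
    return total / trials


for n in (3, 10):
    # Dividing by N (pstdev) underestimates the true sd (2.5) more than
    # dividing by N-1 (stdev), and the gap shrinks as N grows.
    print(n, round(avg_sd_estimate(n, pstdev), 3), round(avg_sd_estimate(n, stdev), 3))
```

Running this shows both estimates below $2.5$ for $N = 3$, with the divide-by-$N$ version noticeably lower, and both closing in on $2.5$ by $N = 10$.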
The notion of a confidence interval is somewhat intuitive, but that very intuitiveness may be keeping you from understanding what it means in more depth.
Say I have multiple samples $x_i$ from a population, and I wish to estimate the population mean $\mu$. A CI of, say, 95% represents an interval of possible values of $\mu$ such that, given my samples, the "probability" that $\mu$ lies in that interval is 95%.
We immediately see that there can be more than one such interval, since I could trade probability past the upper end for probability at the lower end of the interval, thus shifting the interval. Let's skirt that issue by demanding a symmetric interval about my sample mean.
But the "probability" is not well defined from the information I just presented!
In order to assign a probability, I have to make some assumptions about the population. The usual assumption is that the population variance is equal to the unbiased estimator of variance obtained from our sample. But we still have things backward: We can't honestly talk about the probability of the population mean being in some range, without any assumption about the a priori (before I saw my samples) probabilities of the mean being various values.
So we apply the usual sleight-of-mind logic employed by the frequentist point of view. We ask:
Given that the population variance equals our unbiased sample variance estimate, what are the highest and lowest values of the population mean $\mu$ such that the chance of our sample mean being as far from $\mu$ as it actually is, is less than $100\% - 95\% = 5\%$?
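That frequentist reading can be checked by simulation: build the interval from each sample's own statistics and count how often it captures the true $\mu$. The values below ($\mu = 17.5$, $\sigma = 3.8$, $n = 30$) are illustrative assumptions, and I use the textbook z-interval $\bar x \pm 1.96\, s/\sqrt{n}$.

```python
import random
from statistics import mean, stdev

rng = random.Random(1)
mu, sigma, n, trials = 17.5, 3.8, 30, 4000  # assumed illustrative values

covered = 0
for _ in range(trials):
    xs = [rng.gauss(mu, sigma) for _ in range(n)]
    half = 1.96 * stdev(xs) / n ** 0.5  # half-width of the z-interval
    if abs(mean(xs) - mu) <= half:
        covered += 1

coverage = covered / trials
print(coverage)  # close to 0.95
```

The coverage comes out slightly below 95% because $s$ stands in for the unknown $\sigma$; using a $t$ critical value instead of $1.96$ would correct for that.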
Now let's go back to your problem. Since the population is finite, as you draw more samples (without replacement) you actually do learn something about the population. Suppose you had drawn all the objects but one. Taking your unbiased sample variance as the population variance, your 95% confidence interval for the value of that one remaining object would still be roughly $\pm 2\sigma$ wide, but your estimate of the population mean would have a variance of only about $\sigma^2/N$ (a standard error of $\sigma/\sqrt{N}$). This is quite a bit smaller than would be the case for an infinite population, or for a small sample from a large population.
Now when you draw that last sample, you know everything about the distribution. In particular, you know the mean exactly. Therefore any interval that includes the actual mean is a 100% CI. If you then say that the real CI is the tightest such interval, then it has width zero.
I think you may be trying to find the sample size necessary to achieve a certain margin of error in a confidence interval of the type
$$\text{Parameter Estimate} \pm \text{Margin of Error}.$$
(1) Suppose you are going to have $n$ observations from a normal population with unknown population mean $\mu$ and known population standard deviation $\sigma_0.$ Then a 95% confidence interval (CI) is
$$\bar X \pm 1.96 \sigma_0/\sqrt{n},$$
where $\bar X$ is the sample mean and $1.96 \sigma_0/\sqrt{n}$ is the margin of error. If you want to have a specific margin of error $E$ in your CI, then you set $E = 1.96 \sigma_0/\sqrt{n}.$ Everything but $n$ is known. Solve for $n$ and you know how many observations to take.
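As a sketch of that calculation: solve $E = 1.96\,\sigma_0/\sqrt{n}$ for $n$ and round up, since $n$ must be an integer. The values $\sigma_0 = 12$ and $E = 2$ are illustrative, not from the question.

```python
import math


def n_for_margin(sigma0, E, z=1.96):
    """Smallest n with z * sigma0 / sqrt(n) <= E (known-sigma case)."""
    return math.ceil((z * sigma0 / E) ** 2)


print(n_for_margin(12, 2))  # (1.96 * 12 / 2)^2 = 138.3, so n = 139
```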
(2) Suppose you are doing a poll to see how popular Candidate X is in the weeks before an election. Then you want to estimate the population proportion $p$ in favor of Candidate X. You will estimate this by $\hat p = X/n$, where $X$ is the number of interviewed people currently favoring Candidate X, and $n$ is the number of people interviewed. Then a 95% CI for $p$ takes the form $$\hat p \pm 1.96\sqrt{\hat p (1 - \hat p)/n}.$$
Here the margin of error is $1.96 \sqrt{p(1-p)/n}$, but you don't know $p$. So, for planning purposes you might use $p = 1/2$ and set your desired margin of error $$E = 1.96 \sqrt{p(1-p)/n} = 1.96\sqrt{.5(1-.5)/n} \approx 1/\sqrt{n}.$$ Then you can solve for $n$ and you will know how many subjects to interview.
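The proportion case can be sketched the same way, using the conservative $p = 1/2$ so that $E \approx 1/\sqrt{n}$. The margin $E = 0.03$ (a "3-point margin of error") is an illustrative choice, not a value from the question.

```python
import math


def n_for_proportion_margin(E, p=0.5, z=1.96):
    """Smallest n with z * sqrt(p(1-p)/n) <= E, using worst-case p = 1/2
    by default for planning purposes."""
    return math.ceil(z ** 2 * p * (1 - p) / E ** 2)


print(n_for_proportion_margin(0.03))  # 1.96^2 * 0.25 / 0.03^2 = 1067.1, so n = 1068
```

This is where the familiar "about 1,000 respondents for a 3-point margin" rule of thumb in polling comes from.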
Undoubtedly, you will get different answers for $n$ depending on whether you use the formula in (1) or the formula in (2). And there are other kinds of formulas for other kinds of problems.