For the more restricted question

> Why is a biased standard deviation formula typically used?

the simple answer

> Because the associated variance estimator is unbiased. There is no real mathematical/statistical justification.

may be accurate in many cases.
However, this is not always the case. There are at least two important aspects of this issue that should be understood.
First, the sample variance $s^2$ is not just unbiased for Gaussian random variables. It is unbiased for any distribution with finite variance $\sigma^2$ (as discussed below, in my original answer). The question notes that $s$ is not unbiased for $\sigma$, and suggests an alternative which is unbiased for a Gaussian random variable. However, it is important to note that, unlike for the variance, there can be no "distribution-free" unbiased estimator of the standard deviation (*see note below).
Second, as mentioned in the comment by whuber, the fact that $s$ is biased does not impact the standard t-test. First note that, for a Gaussian variable $x$, if we estimate z-scores from a sample $\{x_i\}$ as
$$z_i=\frac{x_i-\mu}{\sigma}\approx\frac{x_i-\bar{x}}{s}$$
then these will be biased.
However the t statistic is usually used in the context of the sampling distribution of $\bar{x}$.
In this case the z-score would be
$$z_{\bar{x}}=\frac{\bar{x}-\mu}{\sigma_{\bar{x}}}\approx\frac{\bar{x}-\mu}{s/\sqrt{n}}=t$$
though we cannot compute $z_{\bar{x}}$ (we do not know $\sigma$), and without a hypothesized value of $\mu$ we cannot compute $t$ either. Nonetheless, if $z_{\bar{x}}$ is standard normal, then the $t$ statistic will follow a Student-t distribution with $n-1$ degrees of freedom. This is not a large-$n$ approximation. The only assumption is that the $x$ samples are i.i.d. Gaussian.
(Commonly the t-test is applied more broadly for possibly non-Gaussian $x$. This does rely on large $n$, which by the central limit theorem ensures that $\bar{x}$ will still be approximately Gaussian.)
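As a quick numerical check of this exactness, in the same Matlab style as the code further below (the values $n=5$, $N=10^6$, $\mu=3$, $\sigma=2$ are arbitrary choices for this sketch): the variance of the simulated $t$ values should match the Student-t variance $\nu/(\nu-2)$ with $\nu=n-1$ degrees of freedom.

% sketch: t statistic from small Gaussian samples
n=5; N=1e6; mu=3; sigma=2;      % arbitrary example values
x=mu+sigma*randn(n,N);          % N experiments, n samples each
xbar=sum(x)/n;                  % sample means (1 x N)
s=sqrt(sum((x-xbar).^2)/(n-1)); % sample standard deviations
t=(xbar-mu)./(s/sqrt(n));       % t statistic for each experiment
var_t=sum(t.^2)/N               % ~ (n-1)/(n-3) = 2 for n=5

Note that the bias of $s$ enters the denominator of every single $t$ value, yet the resulting distribution is exactly Student-t: the bias is accounted for by the shape of that distribution, not by correcting $s$.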
*Clarification on "distribution-free unbiased estimator"
By "distribution free", I mean that the estimator cannot depend on any information about the population $x$ aside from the sample $\{x_1,\ldots,x_n\}$. By "unbiased" I mean that the expected error $\mathbb{E}[\hat{\theta}_n]-\theta$ is uniformly zero, independent of the sample size $n$. (As opposed to an estimator that is merely asymptotically unbiased, a.k.a. "consistent", for which the bias vanishes as $n\to\infty$.)
In the comments, this was given as a possible example of a "distribution-free unbiased estimator". Abstracting a bit, that estimator is of the form $\hat{\sigma}=f[s,n,\kappa_x]$, where $\kappa_x$ is the excess kurtosis of $x$. This estimator is not "distribution free", as $\kappa_x$ depends on the distribution of $x$. The estimator is said to satisfy $\mathbb{E}[\hat{\sigma}]-\sigma_x=\mathrm{O}[\frac{1}{n}]$, where $\sigma_x^2$ is the variance of $x$. Hence the estimator is asymptotically unbiased, but not (absolutely) "unbiased", as the $\mathrm{O}[\frac{1}{n}]$ term can be arbitrarily large for small $n$.
Note: Below is my original "answer". From here on, the comments are about the standard "sample" mean and variance, which are "distribution-free" unbiased estimators (i.e. the population is not assumed to be Gaussian).
This is not a complete answer, but rather a clarification on why the sample variance formula is commonly used.
Given a random sample $\{x_1,\ldots,x_n\}$, so long as the variables have a common mean, the estimator $\bar{x}=\frac{1}{n}\sum_ix_i$ will be unbiased, i.e.
$$\mathbb{E}[x_i]=\mu \implies \mathbb{E}[\bar{x}]=\mu$$
If the variables also have a common finite variance, and they are uncorrelated, then the estimator $s^2=\frac{1}{n-1}\sum_i(x_i-\bar{x})^2$ will also be unbiased, i.e.
$$\mathbb{E}[x_ix_j]-\mu^2=\begin{cases}\sigma^2&i=j\\0&i\neq{j}\end{cases} \implies \mathbb{E}[s^2]=\sigma^2$$
Note that the unbiasedness of these estimators depends only on the above assumptions (and the linearity of expectation; the proof is just algebra). The result does not depend on any particular distribution, such as Gaussian. The variables $x_i$ do not have to have a common distribution, and they do not even have to be independent (i.e. the sample does not have to be i.i.d.).
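For completeness, here is that algebra. Using only the assumptions above,

$$\sum_i\mathbb{E}[x_i^2]=n(\sigma^2+\mu^2), \qquad \mathbb{E}[\bar{x}^2]=\frac{1}{n^2}\sum_{i,j}\mathbb{E}[x_ix_j]=\frac{1}{n^2}\Big(n(\sigma^2+\mu^2)+n(n-1)\mu^2\Big)=\frac{\sigma^2}{n}+\mu^2$$

and therefore

$$\mathbb{E}\Big[\sum_i(x_i-\bar{x})^2\Big]=\sum_i\mathbb{E}[x_i^2]-n\,\mathbb{E}[\bar{x}^2]=n(\sigma^2+\mu^2)-n\Big(\frac{\sigma^2}{n}+\mu^2\Big)=(n-1)\sigma^2$$

so dividing by $n-1$ gives $\mathbb{E}[s^2]=\sigma^2$.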
The "sample standard deviation" $s$ is not an unbiased estimator, $\mathbb{s}\neq\sigma$, but nonetheless it is commonly used. My guess is that this is simply because it is the square root of the unbiased sample variance. (With no more sophisticated justification.)
In the case of an i.i.d. Gaussian sample, the maximum likelihood estimates (MLE) of the parameters are $\hat{\mu}_\mathrm{MLE}=\bar{x}$ and $(\hat{\sigma}^2)_\mathrm{MLE}=\frac{n-1}{n}s^2$, i.e. the variance divides by $n$ rather than $n-1$. Moreover, in the i.i.d. Gaussian case the standard deviation MLE is just the square root of the MLE variance. However these formulas, as well as the one hinted at in your question, depend on the Gaussian i.i.d. assumption.
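For reference, in the i.i.d. Gaussian case the bias of $s$ is known in closed form, $\mathbb{E}[s]=c_4(n)\,\sigma$ with $c_4(n)=\sqrt{\tfrac{2}{n-1}}\,\Gamma(\tfrac{n}{2})\big/\Gamma(\tfrac{n-1}{2})$, which is presumably the kind of correction hinted at in the question. A minimal Matlab sketch ($n$ and $N$ are arbitrary choices):

% sketch: Gaussian-only bias correction for s (the "c4" factor)
n=5; N=1e6;
x=randn(n,N);                   % standard normal, i.e. sigma=1
xbar=sum(x)/n;
s=sqrt(sum((x-xbar).^2)/(n-1)); % sample standard deviations
c4=sqrt(2/(n-1))*gamma(n/2)/gamma((n-1)/2);
s_avg=sum(s)/N                  % ~ c4 = 0.9400 for n=5
s_corrected_avg=sum(s/c4)/N     % ~ sigma = 1

Again, this correction is exact only under the Gaussian assumption; for other populations $s/c_4(n)$ is still biased.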
Update: Additional clarification on "biased" vs. "unbiased".
Consider an $n$-element sample as above, $X=\{x_1,\ldots,x_n\}$, with sum-square-deviation
$$\delta^2_n=\sum_i(x_i-\bar{x})^2$$
Given the assumptions outlined in the first part above, we necessarily have $$\mathbb{E}[\delta^2_n]=(n-1)\sigma^2$$
so the (Gaussian-)MLE estimator is biased
$$\widehat{\sigma^2_n}=\tfrac{1}{n}\delta^2_n \implies \mathbb{E}[\widehat{\sigma^2_n}]=\tfrac{n-1}{n}\sigma^2$$
while the "sample variance" estimator is unbiased
$$s^2_n=\tfrac{1}{n-1}\delta^2_n \implies \mathbb{E}[s^2_n]=\sigma^2$$
Now it is true that $\widehat{\sigma^2_n}$ becomes less biased as the sample size $n$ increases. However $s^2_n$ has zero bias no matter the sample size (so long as $n>1$). For both estimators, the variance of their sampling distribution will be non-zero and will depend on $n$.
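For example, in the i.i.d. Gaussian case the unbiased estimator has sampling variance

$$\mathrm{Var}[s^2_n]=\frac{2\sigma^4}{n-1}$$

(which follows from $(n-1)s^2_n/\sigma^2\sim\chi^2_{n-1}$). Unbiasedness only says that the errors average to zero over repeated experiments; it says nothing about how far any single estimate may be from $\sigma^2$.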
As an example, the Matlab code below considers an experiment with $n=2$ samples from a standard-normal population $z$. To estimate the sampling distributions of $\bar{x}$, $\widehat{\sigma^2}$, and $s^2$, the experiment is repeated $N=10^6$ times. (You can cut & paste the code here to try it out yourself.)
% n=sample size, N=number of samples
n=2; N=1e6;
% generate standard-normal random #'s
z=randn(n,N); % i.e. mu=0, sigma=1
% compute sample stats (Gaussian MLE)
zbar=sum(z)/n; zvar_mle=sum((z-zbar).^2)/n;
% compute ensemble stats (sampling-pdf means)
zbar_avg=sum(zbar)/N, zvar_mle_avg=sum(zvar_mle)/N
% compute unbiased variance
zvar_avg=zvar_mle_avg*n/(n-1)
Typical output is like
zbar_avg = 1.4442e-04
zvar_mle_avg = 0.49988
zvar_avg = 0.99977
confirming that
\begin{align}
\mathbb{E}[\bar{z}]&\approx\overline{(\bar{z})}\approx\mu=0 \\
\mathbb{E}[s^2]&\approx\overline{(s^2)}\approx\sigma^2=1 \\
\mathbb{E}[\widehat{\sigma^2}]&\approx\overline{(\widehat{\sigma^2})}\approx\frac{n-1}{n}\sigma^2=\frac{1}{2}
\end{align}
Update 2: Note on the fundamentally "algebraic" nature of unbiasedness.
In the above numerical demonstration, the code approximates the true expectation $\mathbb{E}[\,]$ using an ensemble average with $N=10^6$ replications of the experiment (i.e. each is a sample of size $n=2$). Even with this large number, the typical results quoted above are far from exact.
To numerically demonstrate that the estimators are really unbiased, we can use a simple trick to approximate the $N\to\infty$ case: simply add the following line to the code
% optional: "whiten" data (ensure exact ensemble stats)
[U,S,V]=svd(z-mean(z,2),'econ'); z=sqrt(N)*U*V';
(placed after the "generate standard-normal random #'s" line and before "compute sample stats")
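To see why this works, write the centered data as $z_c=USV^{\mathsf{T}}$; the replacement $z=\sqrt{N}\,UV^{\mathsf{T}}$ then satisfies

$$zz^{\mathsf{T}}=N\,UV^{\mathsf{T}}VU^{\mathsf{T}}=N\,I_n, \qquad z\mathbf{1}=\sqrt{N}\,UV^{\mathsf{T}}\mathbf{1}=0$$

(the second because $z_c\mathbf{1}=0$ forces $V^{\mathsf{T}}\mathbf{1}=0$ whenever $S$ is nonsingular, which holds almost surely). So after the change, the ensemble means and second moments are exactly $0$ and $1$, as if $N\to\infty$.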
With this simple change, even running the code with $N=10$ gives results like
zbar_avg = 1.1102e-17
zvar_mle_avg = 0.50000
zvar_avg = 1.00000
Best Answer
Short version: This will work with non-Normal data, provided the sample size is large enough.
Longer version: Your population is summarized by the random variable $X$. However, we aren't conducting inference on $X$ - we're conducting inference on $\bar{X}$. That is, we are relying on the sampling distribution of the sample mean. As you may remember from any class or textbook, this relies on something called the Central Limit Theorem. The distribution of $\bar{X}$ is exactly Normal when $X$ is exactly Normal; the CLT states that it is asymptotically Normal (approximately Normal for large sample sizes) even when $X$ is not Normal. Thus, as long as your sample size $n$ is large (most references use $n\approx 30$ as a threshold), $\bar{X}$ will be approximately Normal, which allows us to use $Z$ and the Normal distribution to generate confidence intervals.
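To illustrate (a sketch only; the exponential population and the sizes $n=30$, $N=10^5$ are arbitrary choices), standardized sample means of strongly skewed data are already close to Normal at $n=30$:

% sketch: CLT for skewed (standard exponential) data
n=30; N=1e5;
x=-log(rand(n,N));              % standard exponential: mu=1, sigma=1
xbar=sum(x)/n;                  % sample means (1 x N)
zz=(xbar-1)/(1/sqrt(n));        % standardized sample means
mean_zz=sum(zz)/N               % ~ 0
var_zz=sum((zz-sum(zz)/N).^2)/N % ~ 1
frac=sum(zz<1.645)/N            % ~ 0.95 if zz is near-Normal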