$\newcommand{\+}{^{\dagger}}
\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,}
\newcommand{\dd}{{\rm d}}
\newcommand{\down}{\downarrow}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}
\newcommand{\fermi}{\,{\rm f}}
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{{\rm i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\isdiv}{\,\left.\right\vert\,}
\newcommand{\ket}[1]{\left\vert #1\right\rangle}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,}
\newcommand{\sech}{\,{\rm sech}}
\newcommand{\sgn}{\,{\rm sgn}}
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}
\newcommand{\wt}[1]{\widetilde{#1}}$
\begin{align}
{\rm P}\pars{\xi}&=
\int_{-\infty}^{\infty}\dd x\,{1 \over \root{2\pi}\sigma}\,
\exp\pars{-\,{x^{2} \over 2\sigma^{2}}}
\\ & \
\int_{-\infty}^{\infty}\dd y\,{1 \over \root{2\pi}\sigma}\,
\exp\pars{-\,{y^{2} \over 2\sigma^{2}}}\
\overbrace{\delta\pars{\xi - xy}}^{\ds{\delta\pars{y - \xi/x} \over \verts{x}}}
\\[3mm]&={1 \over 2\pi\sigma^{2}}\int_{-\infty}^{\infty}
\exp\pars{-\,{1 \over 2\sigma^{2}}\bracks{x^{2} + {\xi^{2} \over x^{2}}}}\,
{\dd x \over \verts{x}}
\\[3mm]&={1 \over \pi\sigma^{2}}\int_{0}^{\infty}
\exp\pars{-\,{1 \over 2\sigma^{2}}\bracks{x^{2} + {\xi^{2} \over x^{2}}}}\,
{\dd x \over x}\tag{1}
\end{align}
where $\ds{\delta\pars{x}}$ is the Dirac delta function.
With $\ds{x \equiv A\expo{\theta/2}\,,\quad A > 0\,,\quad\theta \in {\mathbb R}}$:
\begin{align}
{\rm P}\pars{\xi}&={1 \over \pi\sigma^{2}}
\int_{-\infty}^{\infty}
\exp\pars{-\,{1 \over 2\sigma^{2}}
\bracks{A^{2}\expo{\theta} + {\xi^{2} \over A^{2}}\expo{-\theta}}}\,
\pars{A\expo{\theta/2}\,\dd\theta/2 \over A\expo{\theta/2}}
\end{align}
We can choose $\ds{A}$ such that
$\ds{A^{2} = {\xi^{2} \over A^{2}}\quad\imp\quad A = \verts{\xi}^{1/2}}$:
\begin{align}
{\rm P}\pars{\xi}&={1 \over 2\pi\sigma^{2}}
\int_{-\infty}^{\infty}
\exp\pars{-\,{\verts{\xi} \over \sigma^{2}}\cosh\pars{\theta}}\,\dd\theta
={1 \over \pi\sigma^{2}}
\int_{0}^{\infty}
\exp\pars{-\,{\verts{\xi} \over \sigma^{2}}\cosh\pars{\theta}}\,\dd\theta
\end{align}
$$\color{#00f}{\large%
{\rm P}\pars{\xi} = {1 \over \pi\sigma^{2}}\,
{\rm K}_{0}\pars{\verts{\xi} \over \sigma^{2}}}
$$
where $\ds{{\rm K}_{\nu}\pars{z}}$ is a
modified Bessel function of the second kind.
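As a numerical sanity check (a sketch, not part of the derivation; the grids and tolerances are ad hoc), one can verify that $\frac{1}{\pi\sigma^2}\,{\rm K}_0\!\left(|\xi|/\sigma^2\right)$ integrates to $1$ and has second moment $\sigma^4$, as expected for the product of two independent $N(0,\sigma^2)$ variables, computing ${\rm K}_0$ from the same integral representation used above:

```python
import numpy as np

def trap(y, x):
    """Plain trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# K_0 via the integral representation used above:
# K_0(z) = \int_0^\infty exp(-z cosh t) dt, truncated at t = 20
theta = np.linspace(0.0, 20.0, 1501)
cosh_t = np.cosh(theta)

def K0(z):
    return np.array([trap(np.exp(-zi * cosh_t), theta) for zi in z])

sigma = 1.0
# xi grid refined near 0, where K_0 has a logarithmic singularity
xi = np.concatenate([np.linspace(1e-8, 0.01, 2000),
                     np.linspace(0.01, 30.0, 6000)])
p = K0(xi / sigma**2) / (np.pi * sigma**2)   # the derived density

mass = 2 * trap(p, xi)          # density is even: integrate xi > 0, double; ~1
m2 = 2 * trap(xi**2 * p, xi)    # should be E[(XY)^2] = sigma^4
print(mass, m2)
```

The second moment equals $\operatorname{E}[X^2]\operatorname{E}[Y^2] = \sigma^4$ by independence, which gives a check beyond normalization.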
You wrote "which is an expression of the central limit theorem". That is not correct: you started with a normal distribution. The central limit theorem says that if you start with any distribution with finite variance, not assumed to be normal, then the sum of i.i.d. copies will be approximately normally distributed.
Your main question seems to be: how does the chi-square distribution get involved?
You have $X_1,\ldots,X_n\sim\text{ i.i.d. } N(\mu,\sigma^2)$ and $\bar X = (X_1+\cdots+X_n)/n$. The proposition then is
$$
\frac 1 {\sigma^2} \Big((X_1-\bar X)^2 + \cdots + (X_n - \bar X)^2 \Big) \sim \chi^2_{n-1}.
$$
The random variables $(X_i - \bar X)/\sigma$ are not independent, but have covariance $-1/n$ between any two of them.
The standard deviation of each of them is not $1$, but $\sqrt{(n-1)/n}$.
There are not $n-1$ of them, but $n$.
But the distribution of the sum of their squares is asserted to be the same as if (1) they were independent, (2) each had standard deviation $1$, and (3) there were $n-1$ of them.
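A quick seeded simulation (a sketch; seed, sample size, and parameters are arbitrary choices) is consistent with this assertion: the first two moments of $\frac{1}{\sigma^2}\sum_i (X_i-\bar X)^2$ match those of $\chi^2_{n-1}$, namely mean $n-1$ and variance $2(n-1)$:

```python
import numpy as np

rng = np.random.default_rng(0)           # fixed seed for reproducibility
n, mu, sigma, reps = 5, 2.0, 1.5, 200_000

x = rng.normal(mu, sigma, size=(reps, n))
xbar = x.mean(axis=1, keepdims=True)
q = ((x - xbar) ** 2).sum(axis=1) / sigma**2   # asserted ~ chi^2_{n-1}

print(q.mean(), q.var())   # expect roughly n-1 = 4 and 2(n-1) = 8
```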
This can be understood geometrically, as follows: The mapping $(X_1,\ldots,X_n)\mapsto (\bar X, \ldots, \bar X)$ is the orthogonal projection onto a $1$-dimensional subspace of $\mathbb R^n$, and its complementary mapping $(X_1,\ldots,X_n)\mapsto(X_1-\bar X,\ldots,X_n-\bar X)$ is the orthogonal projection onto the $(n-1)$-dimensional subspace defined by the constraint that the sum of the coordinates is $0$. Notice that the latter projection takes the mean vector $(\mu,\ldots,\mu)$ to $(0,\ldots,0)$.
The distribution of $(X_1,\ldots,X_n)$ is spherically symmetric about the mean vector, since it is
$$
\text{constant}\cdot \exp\left( -\frac 1 2 \sum_{i=1}^n\left( \frac{x_i-\mu} \sigma \right)^2 \right)\,dx_1\,\cdots\,dx_n.
$$
The distribution of an orthogonal projection of this random vector, which projection takes the mean vector to $0$, is spherically symmetric about $0$ in the lower-dimensional space onto which one projects. So let $(U_1,\ldots,U_{n-1})$ be the coordinates of $(X_1-\bar X,\ldots, X_n-\bar X)$ relative to some orthonormal basis of that $(n-1)$-dimensional subspace, and then you have
$$
(X_1-\bar X)^2 + \cdots + (X_n-\bar X)^2 = U_1^2 + \cdots + U_{n-1}^2 \tag 1
$$
and
$$
U_1,\ldots,U_{n-1}\sim\text{ i.i.d. }N(0,\sigma^2).
$$
Therefore $(1)$ is distributed as $\sigma^2\chi^2_{n-1}$.
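The length-preserving identity $(1)$ can be checked numerically with one explicit choice of orthonormal basis for $\{x\in\mathbb R^n : \sum_i x_i = 0\}$ (Helmert-style contrasts here, purely for illustration; any orthonormal basis of that subspace works):

```python
import numpy as np

n = 6
# rows of B: an orthonormal basis of {x in R^n : sum(x) = 0} (Helmert contrasts)
B = np.zeros((n - 1, n))
for k in range(1, n):
    B[k - 1, :k] = 1.0        # k ones ...
    B[k - 1, k] = -k          # ... then -k, so each row sums to 0
    B[k - 1] /= np.linalg.norm(B[k - 1])

x = np.array([3.1, -0.7, 2.2, 5.0, 1.3, -2.4])   # arbitrary data vector
u = B @ (x - x.mean())        # coordinates U_1, ..., U_{n-1} in that basis

lhs = np.sum((x - x.mean()) ** 2)
rhs = np.sum(u ** 2)
print(lhs, rhs)   # equal: orthogonal coordinates preserve squared length
```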
Also, notice that $\bar X$ is actually independent of $(X_1 - \bar X,\ldots,X_n-\bar X)$. That follows from the fact that they are jointly normally distributed and $\operatorname{cov}(\bar X, X_i-\bar X)=0$. That is also needed in order to conclude that you get a $t$-distribution.
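The zero covariance is a one-line computation:
$$
\operatorname{cov}(\bar X,\, X_i-\bar X)
= \operatorname{cov}(\bar X, X_i) - \operatorname{var}(\bar X)
= \frac{\sigma^2}{n} - \frac{\sigma^2}{n} = 0.
$$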
PS: One can do this with matrices: There is an $n\times n$ matrix $Q$ for which
$$
QX=Q\begin{bmatrix} X_1 \\ \vdots \\ X_n \end{bmatrix} = \begin{bmatrix} X_1 - \bar X \\ \vdots \\ X_n - \bar X \end{bmatrix}.
$$
It satisfies $Q^2=Q^T=Q$. And $P=I-Q$ satisfies $P^2=P^T=P$, and $QP=PQ=0$. Then we have
$$
\operatorname{cov}(PX, QX) = P\Big(\operatorname{cov}(X,X)\Big) Q^T = P(\sigma^2 I)Q^T = \sigma^2 PQ = 0.
$$
and
$$
QX \sim N\Big(0,\; Q\big(\sigma^2 I\big) Q^T\Big) = N(0,\sigma^2 Q).
$$
(We get $0$ as the mean here because $Q$ times the mean vector is $0$.) The sample standard deviation appearing in the $t$-statistic is then $\|QX\|/\sqrt{n-1}$.
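The matrix $Q$ above is the usual centering matrix $I - \frac1n J$, where $J$ is the all-ones matrix (an explicit choice consistent with the stated properties). A short numerical sketch confirming $Q^2=Q^T=Q$, $PQ=0$, and that $Q$ annihilates the mean vector:

```python
import numpy as np

n = 5
Q = np.eye(n) - np.ones((n, n)) / n   # centering matrix: Qx = x - xbar*1
P = np.eye(n) - Q                     # projection onto span{(1,...,1)}

x = np.array([4.0, 1.0, -2.0, 7.0, 3.5])
mu = 3.0

print(np.allclose(Q @ x, x - x.mean()))             # Qx gives the residuals
print(np.allclose(Q @ Q, Q), np.allclose(Q, Q.T))   # idempotent, symmetric
print(np.allclose(P @ Q, 0))                        # PQ = 0, so cov(PX,QX) = 0
print(np.allclose(Q @ (mu * np.ones(n)), 0))        # Q kills the mean vector
```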
PS: Here's a somewhat different way of expressing it. The vector $(X_1,\ldots,X_n)$ has expected value $(\mu,\ldots,\mu)$. Let $(U_1,\ldots,U_{n-1},U_n)$ be the coordinates of the point $(X_1,\ldots,X_n)$ in a different coordinate system: the $n$th component $U_n$ is the position on an axis pointing from $(0,\ldots,0)$ in the direction of $(\mu,\ldots,\mu)$, and the other $U_k$, for $k=1,\ldots,n-1$, are components in directions at right angles to that one. In the first coordinate system, the projection of $(X_1,\ldots,X_n)$ onto the space in which the sum of the components is $0$ is $(X_1-\bar X,\ldots,X_n-\bar X)$. In the second coordinate system the projection of $(U_1,\ldots,U_{n-1},U_n)$ onto that same space is $(U_1,\ldots,U_{n-1},0)$. Transforming $(\mu,\ldots,\mu)$ into the second coordinate system, we get $(0,\ldots,0,\mu\sqrt n)$, and the projection of that onto the aforementioned subspace is $(0,\ldots,0,0)$. Hence $(X_1-\bar X)^2+\cdots+(X_n-\bar X)^2 = U_1^2+\cdots+U_{n-1}^2$ and $U_1,\ldots,U_{n-1}\sim\text{ i.i.d. } N(0,\sigma^2)$.
Best Answer
Not an answer, but too long for a comment.
First, observe that if $\boldsymbol{X} \sim \mathcal{N}\left(\boldsymbol{0}, I\right)$ (where $I$ is the identity matrix), then $A \boldsymbol{X} + \boldsymbol{b} \sim \mathcal{N}\left(\boldsymbol{b}, A A^T\right)$, and vice versa, if $\boldsymbol{Y} \sim \mathcal{N}(\boldsymbol{\mu}, S)$, then $S^{-1/2} \left(\boldsymbol{Y} - \boldsymbol{\mu}\right) \sim \mathcal{N}\left(\boldsymbol{0}, I\right)$, where $S^{-1/2} = \left(\sqrt{S}\right)^{-1}$ is the inverse of the matrix square root of $S$ (which is defined according to the general theory of functions of symmetric matrices).
Notice that $$\left(\boldsymbol{x} - \boldsymbol{\mu}_b\right)^T S_b^{-1} \left(\boldsymbol{x} - \boldsymbol{\mu}_b\right) = \left(\boldsymbol{x} - \boldsymbol{\mu}_b\right)^T S_b^{-1/2} S_b^{-1/2} \left(\boldsymbol{x} - \boldsymbol{\mu}_b\right) = \\ = \left(\boldsymbol{x} - \boldsymbol{\mu}_b\right)^T \left(S_b^{-1/2}\right)^T S_b^{-1/2} \left(\boldsymbol{x} - \boldsymbol{\mu}_b\right) = \left\|S_b^{-1/2} \left(\boldsymbol{x} - \boldsymbol{\mu}_b\right)\right\|^2,$$ where $\left\| \;\cdot\;\right\|$ denotes Euclidean norm, so $R_{\varepsilon} = \left\{\boldsymbol{x} : \left\|S_b^{-1/2} \left(\boldsymbol{x} - \boldsymbol{\mu}_b\right)\right\| \leq \varepsilon\right\}$. Then for $\boldsymbol{X} \sim \mathcal{N}\left(\boldsymbol{\mu}_a, S_a\right)$, substituting $\boldsymbol{X} = S_a^{1/2} \boldsymbol{Y} + \boldsymbol{\mu}_a$, $$ \mathbb{P}\left(\boldsymbol{X} \in R_{\varepsilon}\right) = \mathbb{P}\left( \left\|S_b^{-1/2} \left(\boldsymbol{X} - \boldsymbol{\mu}_b\right)\right\| \leq \varepsilon\right) = \\ = \mathbb{P}\left( \left\|S_b^{-1/2} \left(S_a^{1/2} \boldsymbol{Y} + \boldsymbol{\mu}_a - \boldsymbol{\mu}_b\right)\right\| \leq \varepsilon\right) = \mathbb{P}\left( \left\|S_b^{-1/2} S_a^{1/2} \boldsymbol{Y} + S_b^{-1/2}\left(\boldsymbol{\mu}_a - \boldsymbol{\mu}_b\right)\right\| \leq \varepsilon\right), $$ where $\boldsymbol{Y} \sim \mathcal{N}\left(\boldsymbol{0}, I\right)$. Let's introduce a new vector $\boldsymbol{W} \sim \mathcal{N}\left(\boldsymbol{\nu}, \Sigma\right)$, where $\boldsymbol{\nu} = S_b^{-1/2}\left(\boldsymbol{\mu}_a - \boldsymbol{\mu}_b\right)$ and $\Sigma = S_b^{-1/2} S_a^{1/2} \left(S_b^{-1/2} S_a^{1/2}\right)^T = S_b^{-1/2} S_a^{1/2} S_a^{1/2} S_b^{-1/2} = S_b^{-1/2} S_a S_b^{-1/2}$.
So, the given problem is equivalent to calculating $\mathbb{P}\left(\left\|\boldsymbol{W} \right\| \leq \varepsilon\right)$ for some non-standard Gaussian vector. It gets tricky even for a 1D Gaussian. In this case, for $X \sim \mathcal{N} \left(\mu_a, s_a\right)$ and some $\mu_b \in \mathbb{R}$, $s_b > 0$, we have $W \sim \mathcal{N} \left(\frac{1}{\sqrt{s_b}} (\mu_a - \mu_b), \frac{s_a}{s_b} \right)$, which leads to $$\mathbb{P}\left(|W| \leq \varepsilon\right) = F_W (\varepsilon) - F_W(-\varepsilon) = \\ = \Phi\left( \frac{\varepsilon - \frac{1}{\sqrt{s_b}} (\mu_a - \mu_b)}{\sqrt{\frac{s_a}{s_b}}}\right) - \Phi\left( \frac{-\varepsilon - \frac{1}{\sqrt{s_b}} (\mu_a - \mu_b)}{\sqrt{\frac{s_a}{s_b}}}\right) = \\ = \Phi \left(\frac{\varepsilon \sqrt{s_b} - (\mu_a - \mu_b)}{\sqrt{s_a}}\right) - \Phi \left(\frac{-\varepsilon \sqrt{s_b} - (\mu_a - \mu_b)}{\sqrt{s_a}}\right),$$ where $\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2}\, \mathrm{d} t$ is the standard normal CDF. As one can see, this expression becomes much simpler in case $\mu_a = \mu_b$.
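The 1D formula can be sanity-checked numerically (a sketch; the parameter values are arbitrary), comparing the simplified closed form against $\mathbb{P}(|W|\leq\varepsilon)$ computed directly from $W$'s own mean and variance:

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

mu_a, s_a = 1.2, 2.0      # here s_a, s_b denote variances, as in the text
mu_b, s_b = 0.5, 3.0
eps = 0.8

# W ~ N((mu_a - mu_b)/sqrt(s_b), s_a/s_b): direct CDF difference
m, v = (mu_a - mu_b) / sqrt(s_b), s_a / s_b
direct = Phi((eps - m) / sqrt(v)) - Phi((-eps - m) / sqrt(v))

# the simplified closed form derived above
formula = (Phi((eps * sqrt(s_b) - (mu_a - mu_b)) / sqrt(s_a))
           - Phi((-eps * sqrt(s_b) - (mu_a - mu_b)) / sqrt(s_a)))
print(direct, formula)   # agree to floating-point precision
```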
But as I mentioned in comments, I don't think it's possible to obtain some closed form solution in multidimensional case (however, if $\boldsymbol{\mu}_a = \boldsymbol{\mu}_b$, things may get a bit simpler).
EDIT: for $\boldsymbol{W} \sim \mathcal{N}\left(\boldsymbol{\nu}, \Sigma\right)$ we can write $ \left\|\boldsymbol{W} \right\|^2 = \left\|\Sigma^{1/2} \boldsymbol{Y} + \boldsymbol{\nu}\right\|^2 $ for $\boldsymbol{Y} \sim \mathcal{N}\left(\boldsymbol{0}, I\right)$. Let $\left<\cdot, \cdot \right>$ denote the standard dot product; then $$ \left\|\boldsymbol{W} \right\|^2 = \left<\Sigma^{1/2} \boldsymbol{Y} + \boldsymbol{\nu}, \Sigma^{1/2} \boldsymbol{Y} + \boldsymbol{\nu}\right> = \left<\Sigma^{1/2} \boldsymbol{Y}, \Sigma^{1/2} \boldsymbol{Y} \right> + 2 \left<\Sigma^{1/2} \boldsymbol{Y}, \boldsymbol{\nu}\right> + \left<\boldsymbol{\nu}, \boldsymbol{\nu} \right> = \\ = \left<\Sigma \boldsymbol{Y}, \boldsymbol{Y} \right> + 2 \left<\Sigma^{1/2} \boldsymbol{Y}, \boldsymbol{\nu}\right> + \left\|\boldsymbol{\nu}\right\|^2 $$ As $\Sigma^{1/2}$ is a symmetric PSD matrix, $\Sigma^{1/2} = Q \Lambda Q^T$ for some orthogonal $Q$ and diagonal $\Lambda$, and $\Sigma = Q \Lambda^2 Q^T.$ So, continuing the previous equalities, $$ \left\|\boldsymbol{W} \right\|^2 = \left<Q \Lambda^2 Q^T \boldsymbol{Y}, \boldsymbol{Y} \right> + 2 \left<Q \Lambda Q^T \boldsymbol{Y}, \boldsymbol{\nu}\right> + \left\|\boldsymbol{\nu}\right\|^2 = \\ = \left<\Lambda^2 Q^T \boldsymbol{Y}, Q^T \boldsymbol{Y} \right> + 2 \left<\Lambda Q^T \boldsymbol{Y}, Q^T\boldsymbol{\nu}\right> + \left\|\boldsymbol{\nu}\right\|^2. 
$$ As the standard Gaussian distribution is spherically symmetric, $Q^T \boldsymbol{Y}$ and $\boldsymbol{Y}$ have equal distributions ($Q^T \boldsymbol{Y} \overset{d}{=} \boldsymbol{Y}$), so $$ \left\|\boldsymbol{W} \right\|^2 \overset{d}{=} \left<\Lambda^2 \boldsymbol{Y}, \boldsymbol{Y} \right> + 2 \left<\Lambda \boldsymbol{Y}, Q^T\boldsymbol{\nu}\right> + \left\|\boldsymbol{\nu}\right\|^2 = \\ = \sum_{i} \lambda_i^2 \boldsymbol{Y}_i^2 + 2 \sum_{i} \lambda_i \left(Q^T\boldsymbol{\nu}\right)_i \boldsymbol{Y}_i + \left\|\boldsymbol{\nu}\right\|^2, $$ where $\lambda_i$ is the $i$-th diagonal element of $\Lambda$. Each $\boldsymbol{Y}_i^2$ follows a $\chi^2_1$ distribution and $2 \sum_{i} \lambda_i \left(Q^T\boldsymbol{\nu}\right)_i \boldsymbol{Y}_i + \left\|\boldsymbol{\nu}\right\|^2$ is Gaussian with mean $\left\|\boldsymbol{\nu}\right\|^2$ and variance $4 \left\|\Lambda Q^T\boldsymbol{\nu}\right\|^2$.
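The algebraic decomposition of $\left\|\boldsymbol{W}\right\|^2$ can be verified for one concrete $\Sigma$, $\boldsymbol\nu$, and realization of $\boldsymbol Y$ (all numbers chosen arbitrarily for illustration), with $\Sigma^{1/2} = Q\Lambda Q^T$ obtained from the eigendecomposition of $\Sigma$:

```python
import numpy as np

# an arbitrary symmetric PSD Sigma (A A^T form) and shift nu
A = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigma = A @ A.T
nu = np.array([0.7, -1.1])
y = np.array([0.3, 1.9])                  # one realization of Y

# Sigma^{1/2} = Q Lam Q^T via eigendecomposition of Sigma
w, Qm = np.linalg.eigh(Sigma)
lam = np.sqrt(w)                          # diagonal of Lam
sqrt_Sigma = Qm @ np.diag(lam) @ Qm.T

lhs = np.sum((sqrt_Sigma @ y + nu) ** 2)  # ||W||^2 at this realization
z, m = Qm.T @ y, Qm.T @ nu                # rotated coordinates
rhs = np.sum(lam**2 * z**2) + 2 * np.sum(lam * m * z) + np.sum(nu**2)
print(lhs, rhs)   # equal, term by term as in the expansion above
```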
EDIT: what I've written before about the generalized chi-squared distribution is not applicable, since the Gaussian term is not independent of the chi-squared ones.