We have independent $X_1\sim N(\mu,\sigma^2)$ and $X_2\sim N(\mu,\sigma^2)$ (a common mean is needed for what follows), hence
$$EY_1=E(-X_1/\sqrt{2}+X_2/\sqrt{2})=-\frac{1}{\sqrt{2}}EX_1+\frac{1}{\sqrt{2}}EX_2=\frac{\mu-\mu}{\sqrt{2}}=0$$
Since $EY_1=0$, the second moment is just the variance, and by independence the variances add:
\begin{align*}
EY_1^2&=\operatorname{Var}(Y_1)=\operatorname{Var}\left(-X_1/\sqrt{2}+X_2/\sqrt{2}\right)\\
&=\tfrac{1}{2}\operatorname{Var}(X_1)+\tfrac{1}{2}\operatorname{Var}(X_2)\\
&=\tfrac{1}{2}\sigma^2+\tfrac{1}{2}\sigma^2=\sigma^2
\end{align*}
Hence $Y_1\sim N(0,\sigma^2)$, since a linear combination of independent normal variables is again normal.
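A quick simulation supports this (a sketch, assuming independent draws with a common mean; the particular parameter values are arbitrary):

```python
import numpy as np

# Hypothetical parameters for illustration; any common mean works
mu, sigma = 3.0, 1.5
rng = np.random.default_rng(5)
x1 = rng.normal(mu, sigma, size=500_000)
x2 = rng.normal(mu, sigma, size=500_000)

# Y1 = -X1/sqrt(2) + X2/sqrt(2) should be N(0, sigma^2)
y1 = (-x1 + x2) / np.sqrt(2)
print(y1.mean(), y1.var())  # close to 0 and sigma^2 = 2.25
```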
Similarly we get $Y_2\sim N(0,\sigma^2)$ and $Y_3\sim N(0,\sigma^2)$.
Now
$$EY_1Y_2=\frac{1}{\sqrt{6}}EX_1^2-\frac{1}{\sqrt{6}}EX_2^2=0,$$
and similarly $EY_2Y_3=EY_1Y_3=0$. Hence $Y_1$, $Y_2$ and $Y_3$ are independent, since they are jointly normal (being linear combinations of the same independent normal variables), and for jointly normal variables independence coincides with zero correlation.
Having established that, we have
$$\frac{Y_1^2+Y_2^2+Y_3^2}{\sigma^2}=\left(\frac{Y_1}{\sigma}\right)^2+\left(\frac{Y_2}{\sigma}\right)^2+\left(\frac{Y_3}{\sigma}\right)^2=Z_1^2+Z_2^2+Z_3^2,$$
where $Z_i=Y_i/\sigma$. Since $Y_i\sim N(0,\sigma^2)$, we have $Z_i\sim N(0,1)$.
We have shown that our quantity of interest is a sum of squares of 3 independent standard normal variables, which by definition is $\chi^2$ with 3 degrees of freedom.
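This conclusion is easy to check by simulation (a sketch, simulating the $Y_i$ directly as independent $N(0,\sigma^2)$ draws and comparing the first two moments of the sum of squares with those of $\chi^2_3$, namely mean $3$ and variance $6$):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
# Three independent N(0, sigma^2) variables standing in for Y1, Y2, Y3
y = rng.normal(0.0, sigma, size=(200_000, 3))
q = (y**2).sum(axis=1) / sigma**2

# chi^2 with 3 degrees of freedom has mean 3 and variance 6
print(q.mean(), q.var())
```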
As I've said in the comments, you do not need to calculate the densities. If, on the other hand, you want to do that, your formula is wrong. Here is why. Denote by $G(x)$ the distribution function of $Y_1^2$ and by $F(x)$ the distribution function of $Y_1$. Then we have
$$G(x)=P(Y_1^2<x)=P(-\sqrt{x}<Y_1<\sqrt{x})=F(\sqrt{x})-F(-\sqrt{x})$$
Now the density of $Y_1^2$ is $G'(x)$, so
$$G'(x)=\frac{1}{2\sqrt{x}}\left(F'(\sqrt{x})+F'(-\sqrt{x})\right)$$
We have that
$$F'(x)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{x^2}{2\sigma^2}},$$
so
$$G'(x)=\frac{1}{\sigma\sqrt{2\pi x}}e^{-\frac{x}{2\sigma^2}}$$
If $\sigma^2=1$, this is the pdf of $\chi^2$ with one degree of freedom. (Note that for $Z_1$ instead of $Y_1$ the calculation is similar, with $\sigma^2=1$.) As @whuber pointed out, this is a gamma distribution, and a sum of independent gamma variables with a common scale parameter is again gamma; the exact formula is provided on the Wikipedia page.
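The derived density of $Y_1^2$ can be checked numerically (a sketch: the numerical integral of $G'(x)=e^{-x/(2\sigma^2)}/(\sigma\sqrt{2\pi x})$ over an interval is compared with a Monte Carlo estimate of the corresponding probability):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.5
y1 = rng.normal(0.0, sigma, size=1_000_000)
sq = y1**2

def g_prime(x, sigma):
    # G'(x) = exp(-x / (2 sigma^2)) / (sigma * sqrt(2 pi x))
    return math.exp(-x / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi * x))

a, b = 0.5, 2.0
# midpoint-rule integral of G' over [a, b]
xs = np.linspace(a, b, 10_001)
mids = (xs[:-1] + xs[1:]) / 2
integral = sum(g_prime(x, sigma) for x in mids) * (xs[1] - xs[0])

# Monte Carlo estimate of P(a < Y1^2 < b)
mc = np.mean((sq > a) & (sq < b))
print(integral, mc)
```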
Simplify the term in the integral to
$T=e^{-\frac{1}{2}((\frac{\frac{z}{y}-\mu_x}{\sigma_x} )^2 -y)} y^{k/2-2} $
Find a polynomial $p(y)$ such that
$[p(y)e^{-\frac{1}{2}((\frac{\frac{z}{y}-\mu_x}{\sigma_x} )^2 -y)}]'=p'(y)e^{-\frac{1}{2}((\frac{\frac{z}{y}-\mu_x}{\sigma_x} )^2 -y)} + p(y) [-\frac{1}{2}((\frac{\frac{z}{y}-\mu_x}{\sigma_x} )^2 -y)]' e^{-\frac{1}{2}((\frac{\frac{z}{y}-\mu_x}{\sigma_x} )^2 -y)} = T$
which reduces to finding $p(y)$ such that
$p'(y) + p(y) [-\frac{1}{2}((\frac{\frac{z}{y}-\mu_x}{\sigma_x} )^2 -y)]' = y^{k/2-2}$
or
$p'(y) -\frac{1}{2} p(y) \left(\frac{2 z \mu_x }{\sigma_x^2} y^{-2} - \frac{2 z^2}{\sigma_x^2} y^{-3} -1\right)= y^{k/2-2}$
which can be done by matching all powers of $y$ separately.
Edit after comments:
The solution above won't work, as the integral diverges.
Still, others have worked on this type of product.
Using the Fourier transform:
Schoenecker, Steven, and Tod Luginbuhl. "Characteristic Functions of the Product of Two Gaussian Random Variables and the Product of a Gaussian and a Gamma Random Variable." IEEE Signal Processing Letters 23.5 (2016): 644-647.
http://ieeexplore.ieee.org/document/7425177/#full-text-section
For the product $Z=XY$ with $X \sim \mathcal{N}(0,1)$ and $Y \sim \Gamma(\alpha,\beta)$ they obtained the characteristic function:
$\varphi_{Z}(t) = \frac{1}{\beta^\alpha }\vert t \vert^{-\alpha} \exp \left( \frac{1}{4\beta^2t^2} \right) D_{-\alpha} \left( \frac{1}{\beta \vert t \vert } \right)$
with $D_{-\alpha}$ the parabolic cylinder (Whittaker) function ( http://people.math.sfu.ca/~cbm/aands/page_686.htm )
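Assuming $\beta$ is the scale parameter of the gamma distribution (so $EY=\alpha\beta$; the paper's parametrization should be checked), this characteristic function can be compared against simulation. A sketch using `scipy.special.pbdv`, which evaluates the parabolic cylinder function $D_v$; the parameter values are arbitrary:

```python
import numpy as np
from scipy.special import pbdv

rng = np.random.default_rng(2)
alpha, beta = 2.0, 0.5          # gamma shape and (assumed) scale
n = 1_000_000
z = rng.normal(size=n) * rng.gamma(alpha, beta, size=n)

def phi(t):
    # characteristic function quoted from Schoenecker & Luginbuhl
    d, _ = pbdv(-alpha, 1.0 / (beta * abs(t)))
    return beta**(-alpha) * abs(t)**(-alpha) * np.exp(1.0 / (4 * beta**2 * t**2)) * d

t = 1.3
# Z is symmetric, so its characteristic function is real: E[cos(tZ)]
mc = np.cos(t * z).mean()
print(mc, phi(t))
```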
Using Mellin transform:
Springer and Thompson have described more generally the evaluation of products of beta-, gamma- and Gaussian-distributed random variables.
Springer, M. D., and W. E. Thompson. "The distribution of products of beta, gamma and Gaussian random variables." SIAM Journal on Applied Mathematics 18.4 (1970): 721-737.
http://epubs.siam.org/doi/10.1137/0118065
They use the Mellin integral transform. The Mellin transform of $Z$ is the product of the Mellin transforms of $X$ and $Y$ (see http://epubs.siam.org/doi/10.1137/0118065 or https://projecteuclid.org/euclid.aoms/1177730201). In the cases of products they studied, the inverse transform of this product can be expressed as a Meijer G-function, for which they also provide and prove computational methods.
They did not analyze the product of a Gaussian and a gamma-distributed variable, although you might be able to use the same techniques. If I try to do this quickly, then I believe it should be possible to obtain an H-function (https://en.wikipedia.org/wiki/Fox_H-function), although I do not directly see the possibility of getting a G-function or making other simplifications.
Using
$M\lbrace f_Y(x) \vert s \rbrace = 2^{s-1} \Gamma(\tfrac{1}{2}k+s-1)/\Gamma(\tfrac{1}{2}k)$
and
$M\lbrace f_X(x) \vert s \rbrace = \frac{1}{\pi}2^{(s-1)/2} \sigma^{s-1} \Gamma(s/2) $
you get
$M\lbrace f_Z(x) \vert s \rbrace = \frac{1}{\pi}2^{\frac{3}{2}(s-1)} \sigma^{s-1} \Gamma(s/2) \Gamma(\tfrac{1}{2}k+s-1)/\Gamma(\tfrac{1}{2}k) $
and the distribution of $Z$ is:
$f_Z(y) = \frac{1}{2 \pi i} \int_{c-i \infty}^{c+i \infty} y^{-s} M\lbrace f_Z(x) \vert s \rbrace ds $
which looks to me (after a change of variables to eliminate the $2^{\frac{3}{2}(s-1)}$ term) like at least an H-function.
What is still left is the puzzle of expressing this inverse Mellin transform as a G-function. The occurrence of both $s$ and $s/2$ complicates this. In the separate case of a product of only Gaussian-distributed variables, the $s/2$ could be transformed into $s$ by substituting the variable $x=w^2$. But because of the terms of the chi-square distribution this no longer works. Maybe this is the reason why nobody has provided a solution for this case.
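The chi-square Mellin transform quoted above can at least be verified numerically: with the convention $M\lbrace f \vert s \rbrace=\int_0^\infty x^{s-1}f(x)\,dx$, it equals $E[Y^{s-1}]$ for $Y\sim\chi^2_k$. A sketch:

```python
import math
import numpy as np

rng = np.random.default_rng(3)
k = 4          # chi-square degrees of freedom
s = 2.7        # an arbitrary test point for the Mellin variable
y = rng.chisquare(k, size=1_000_000)

# Monte Carlo estimate of M{f_Y | s} = E[Y^(s-1)]
mc = np.mean(y**(s - 1))
closed_form = 2**(s - 1) * math.gamma(k/2 + s - 1) / math.gamma(k/2)
print(mc, closed_form)
```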
Best Answer
This is a brute force solution requiring just multivariable calculus.
It suffices to prove that the sample mean $$ \bar{X} = \frac{1}{n} \sum_{i=1}^n X_i $$ and the sample variance $$ S^2 = \frac{1}{n - 1} \sum_{i=1}^n \left(X_i - \bar{X}\right)^2 $$ are independent. Thus, it suffices to prove that the sample mean $\bar{X}$ is independent of the vector $$ (X_1 - \bar{X}, \ldots, X_n - \bar{X}). $$ Moreover, since $$ \begin{aligned} \sum_{i=1}^n (X_i - \bar{X}) &= \sum_{i=1}^n X_i - \sum_{i=1}^n \bar{X} \\ &= n \bar{X} - n \bar{X} \\ &= 0, \end{aligned} $$ we have $$ X_1 - \bar{X} = -\sum_{i=2}^n (X_i - \bar{X}), $$ so $X_1 - \bar{X}$ can be recovered from just knowing $(X_2 - \bar{X}, \ldots, X_n - \bar{X})$.
Now consider the joint density $$ \begin{aligned} f_{(X_1, \ldots, X_n)}(x_1, \ldots, x_n) &= \left(2 \pi \sigma^2\right)^{-n/2} \exp\left(-\sum_{i=1}^n \frac{1}{2}\left(\frac{x_i - \mu}{\sigma}\right)^2\right) \\ &= \left(2 \pi \sigma^2\right)^{-n/2} \exp\left(-\sum_{i=1}^n \frac{1}{2}\left(\frac{x_i - \bar{x}}{\sigma}\right)^2 - \frac{n}{2}\left(\frac{\bar{x} - \mu}{\sigma}\right)^2\right) \\ &= \underbrace{\left(2 \pi \sigma^2\right)^{-n/2}}_{\text{constant}} \underbrace{\exp\left(-\sum_{i=1}^n \frac{1}{2}\left(\frac{x_i - \bar{x}}{\sigma}\right)^2\right)}_{\text{depends only on $(x_2-\bar{x},\ldots,x_n-\bar{x})$}} \underbrace{\exp\left(-\frac{n}{2}\left(\frac{\bar{x} - \mu}{\sigma}\right)^2\right)}_{\text{depends only on $\bar{x}$}}. \end{aligned} $$ To get from $(X_1,\ldots,X_n)$ to $(\bar{X}, X_2 - \bar{X}, \ldots, X_n - \bar{X})$, consider the diffeomorphism $T : \mathbb{R}^n \to \mathbb{R}^n$ given by $$ T(x_1, \ldots, x_n) = (\bar{x}, x_2 - \bar{x}, \ldots, x_n - \bar{x}). $$ ($T$ is a diffeomorphism since it's clearly differentiable and its inverse is given by $$ T^{-1}(y_1, \ldots, y_n) = \left(y_1 - \sum_{i=2}^n y_i, y_2 + y_1, \ldots, y_n + y_1\right), $$ which is also clearly differentiable.) Up to transpose, the Jacobian matrix of $T$ is $$ DT(x_1, \ldots, x_n) = \begin{bmatrix} 1/n & 1/n & 1/n & \cdots & 1/n \\ -1/n & (n - 1) / n & -1/n & \cdots & -1/n \\ -1/n & -1/n & (n - 1) / n & \cdots & -1/n \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -1/n & -1/n & -1/n & \cdots & (n - 1)/n \end{bmatrix}, $$ which doesn't depend on $x_1, \ldots, x_n$. Thus, the determinant of $DT$ is some constant $C$. Now the joint density of $(\bar{X}, X_2 - \bar{X}, \ldots, X_n - \bar{X})$ satisfies $$ f_{(\bar{X}, X_2 - \bar{X}, \ldots, X_n - \bar{X})}(y_1, \ldots, y_n) = |C| f_{(X_1, \ldots, X_n)}(T^{-1}(y_1, \ldots, y_n)) $$ which factors as a function of $y_1$ times a function of $(y_2, \ldots, y_n)$ by what was shown above.
Therefore, $\bar{X}$ and $(X_2 - \bar{X}, \ldots, X_n - \bar{X})$ are independent.
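The independence of $\bar{X}$ and $S^2$ can be illustrated by simulation (a sketch; near-zero sample correlation between functions of the two is consistent with, though of course weaker than, independence):

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 5, 200_000
x = rng.normal(loc=1.0, scale=2.0, size=(reps, n))
xbar = x.mean(axis=1)
s2 = x.var(axis=1, ddof=1)   # sample variance with the n-1 denominator

# Independence implies zero correlation between arbitrary functions of the two
c1 = np.corrcoef(xbar, s2)[0, 1]
c2 = np.corrcoef(xbar**2, np.sqrt(s2))[0, 1]
print(c1, c2)
```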