If you see this as a Mellin transform with $s=1$, then by the Ramanujan master theorem (RMT) you are talking about the integral being
$$
\int_0^\infty x^{s-1} \;_2F_1(a,b;c;-x)^2 \; dx = \Gamma(s) C_{-s}
$$
for the power series parameterised as
$$
_2F_1(a,b;c;-x)^2 = \sum_{k=0}^\infty \frac{(-1)^k}{k!}C_k x^k
$$
but this depends on the RMT still holding for the square of the power series. I find the negative argument in the hypergeometric function is what makes this work neatly, since $(-1)^k x^k = (-x)^k$. So perhaps think about the Cauchy product
$$
\left(\sum_{i=0}^\infty \frac{(a)_i (b)_i}{(c)_i i!} (-x)^i\right)\left(\sum_{j=0}^\infty \frac{(a)_j (b)_j}{(c)_j j!} (-x)^j\right)
$$
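This says nothing about whether the RMT actually applies to the square, but here is a small stdlib-Python sketch (the parameters $a,b,c$ and the test point $x$ are arbitrary) showing how the Cauchy product packages the squared series into the RMT form $\sum_k \frac{(-1)^k}{k!} C_k x^k$:

```python
from math import gamma

def poch(q, k):
    """Rising factorial (q)_k = q(q+1)...(q+k-1)."""
    r = 1.0
    for i in range(k):
        r *= q + i
    return r

def f21_coeff(a, b, c, k):
    """k-th Taylor coefficient of 2F1(a,b;c;x)."""
    return poch(a, k) * poch(b, k) / (poch(c, k) * gamma(k + 1))

def rmt_coeffs(a, b, c, K):
    """C_k with 2F1(a,b;c;-x)^2 = sum_k (-1)^k C_k x^k / k!,
    obtained from the Cauchy product of the series with itself."""
    cs = [f21_coeff(a, b, c, k) for k in range(K + 1)]
    return [gamma(k + 1) * sum(cs[i] * cs[k - i] for i in range(k + 1))
            for k in range(K + 1)]

# arbitrary test values, |x| < 1 so the raw series converges
a, b, c, x = 0.5, 1.5, 2.5, 0.2
K = 40
# truncated 2F1(a,b;c;-x), squared directly
F = sum(f21_coeff(a, b, c, k) * (-x) ** k for k in range(K + 1))
# the same quantity rebuilt from the Cauchy-product coefficients C_k
C = rmt_coeffs(a, b, c, K)
F2 = sum((-1) ** k * C[k] * x ** k / gamma(k + 1) for k in range(K + 1))
print(F * F, F2)
```

The open question is of course the analytic continuation $C_k \to C_{-s}$, which the Cauchy product alone does not give you.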
Also, you might want to rewrite the $\pi\csc(\pi s)$ terms in the form $\Gamma(s)\Gamma(1-s)$ to spot patterns. I have a note about spotting patterns in gamma functions; I'll see if I can find it...
Edit: The following might be useful if there is a way to consider a 'confluence' of sorts from integrals of the form
$$
\int_0^\infty \int_0^\infty x_1^{s_1-1} x_2 ^{s_2-1} f_1(x_1) f_2(x_2) \; dx_1 dx_2 \to \int_0^\infty x^{s-1} f(x) f(x) \; dx
$$
We could view this as a multidimensional Mellin transform, but I have found that the result can depend on the order of integration. If some Fubini-type condition holds, then:
If the functions $f_k(x)$ have Mellin transforms $g_k(s)$, and scale coefficients are included, then this nested $D$-dimensional Mellin transform of the product of the functions is given by
$$
\mathcal{M}_D\left[\prod_{k=1}^n f_k\left(\alpha_k \prod_{l=1}^n x_l^{a_{kl}}\right) \right] = \frac{\prod_{k=1}^n \alpha_k^{-(A^\top)^{-1}_k \mathbf{s}}}{|\det(A)|}\prod_{k=1}^n g_k((A^\top)^{-1}_k \mathbf{s})
$$
where $A_{kl}=a_{kl}$ and $(A^\top)^{-1}_k$ denotes the $k$-th row of $(A^\top)^{-1}$.
An example
Solve
$$
I = \int_0^\infty \int_0^\infty \int_0^\infty x_1^{s_1-1} x_2^{s_2-1} x_3^{s_3-1} e^{-\frac{\alpha x_1 x_2}{x_3}}J_n(\beta x_1^2 x_2)\mathrm{Ai}(\gamma x_3) \; dx_1 dx_2 dx_3
$$
with Bessel function $J_n(x)$ and Airy function $\mathrm{Ai}(x)$. We have $f_1(x) = e^{-x}$, $f_2(x) = J_n(x)$, $f_3(x) = \mathrm{Ai}(x)$. We look up the Mellin transforms
$$
g_1(s) = \Gamma(s)
$$
$$
g_2(s) = \frac{2^{s-1} \Gamma \left(\frac{n}{2}+\frac{s}{2}\right)}{\Gamma \left(\frac{n}{2}-\frac{s}{2}+1\right)}
$$
$$
g_3(s) = \frac{3^{\frac{2 s}{3}-\frac{7}{6}} \Gamma \left(\frac{s}{3}+\frac{1}{3}\right) \Gamma \left(\frac{s}{3}\right)}{2 \pi }
$$
We inspect the integrand and read off the coefficient matrix
$$
A = \begin{bmatrix} 1 & 1 & -1 \\ 2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \;\; (A^\top)^{-1} = \begin{bmatrix} -1 & 2 & 0 \\ 1 & -1 & 0 \\ -1 & 2 & 1 \end{bmatrix}, \;\; \det(A) = -1
$$
so we have
$$
I = \alpha^{s_1 - 2 s_2}\beta^{s_2-s_1}\gamma^{s_1-2 s_2-s_3} \Gamma(2s_2-s_1) \frac{2^{s_1-s_2-1} \Gamma \left(\frac{n}{2}+\frac{s_1-s_2}{2}\right)}{\Gamma \left(\frac{n}{2}-\frac{s_1-s_2}{2}+1\right)} \frac{3^{\frac{2 (2s_2-s_1+s_3)}{3}-\frac{7}{6}} \Gamma \left(\frac{(2s_2-s_1+s_3)}{3}+\frac{1}{3}\right) \Gamma \left(\frac{(2s_2-s_1+s_3)}{3}\right)}{2 \pi }
$$
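To guard against transcription slips, here is a quick stdlib-Python check that assembling the general formula from $A$ reproduces the explicit expression above. The test values for $s_k$, $n$, $\alpha$, $\beta$, $\gamma$ are arbitrary, chosen only so that every gamma argument stays positive:

```python
from math import gamma, pi

def inv3(M):
    """Inverse of a 3x3 matrix via the adjugate; also returns det."""
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj], det

# Mellin transforms looked up above
def g1(s):           # exp(-x)
    return gamma(s)

def g2(s, n):        # J_n(x)
    return 2**(s - 1) * gamma((n + s) / 2) / gamma(n/2 - s/2 + 1)

def g3(s):           # Ai(x)
    return 3**(2*s/3 - 7/6) * gamma(s/3 + 1/3) * gamma(s/3) / (2 * pi)

A = [[1, 1, -1], [2, 1, 0], [0, 0, 1]]
At = [[A[j][i] for j in range(3)] for i in range(3)]
AtInv, det = inv3(At)

s = (0.8, 0.7, 0.9); n = 1
alph = (1.3, 0.9, 1.1)      # alpha, beta, gamma in the integrand (test values)
u = [sum(AtInv[k][l] * s[l] for l in range(3)) for k in range(3)]

# general formula: prod alpha_k^{-u_k} / |det A| * prod g_k(u_k)
gen = 1.0
for k in range(3):
    gen *= alph[k]**(-u[k])
gen *= g1(u[0]) * g2(u[1], n) * g3(u[2]) / abs(det)

# explicit closed form as written out above
s1, s2, s3 = s
al, be, ga = alph
exp_ = (al**(s1 - 2*s2) * be**(s2 - s1) * ga**(s1 - 2*s2 - s3)
        * gamma(2*s2 - s1)
        * 2**(s1 - s2 - 1) * gamma(n/2 + (s1 - s2)/2) / gamma(n/2 - (s1 - s2)/2 + 1)
        * 3**(2*(2*s2 - s1 + s3)/3 - 7/6)
        * gamma((2*s2 - s1 + s3)/3 + 1/3) * gamma((2*s2 - s1 + s3)/3) / (2 * pi))
print(gen, exp_)
```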
Implication
When I see your example result, there are immediate patterns that hint at this linear-combination-of-variables approach
$$
A = -\frac{\pi ^{3/2} 2^{-2 a-2 b+3} \csc (2 \pi a) \csc (2 \pi b) \cos (\pi (a+b)) \Gamma \left(-a-b+\frac{3}{2}\right) \Gamma (a+b-1)}{\Gamma (2-2 a) \Gamma (a)^2 \Gamma (2-2 b) \Gamma (b)^2}
$$
for example $2^{-2 a - 2 b + 3}=4^{-a-b+3/2}$, and the linear combination $-a-b+3/2$ also appears in the gamma function $\Gamma\left(-a-b+\frac{3}{2}\right)$. One possible goal is to split your expression into a product of $N$ distinct Mellin transforms and reverse-engineer the original integral as a product of simpler integrals.
We can rewrite your first result using
$$
\cos\left(\frac{\pi s}{2}\right) = \frac{\pi}{\Gamma\left(\frac{1}{2} + \frac{s}{2}\right)\Gamma\left(\frac{1}{2}-\frac{s}{2}\right)}
$$
and
$$
\pi \csc(\pi s) = \Gamma(s)\Gamma(1-s)
$$
to
$$
A = -\pi\csc (2 \pi a) \pi\csc (2 \pi b) \cos (\frac{\pi}{2} (2a+2b)) \frac{4^{-a-b+3/2}}{\pi^{1/2}}\frac{\Gamma \left(-a-b+\frac{3}{2}\right) \Gamma (a+b-1)}{\Gamma (2-2 a) \Gamma (a)^2 \Gamma (2-2 b) \Gamma (b)^2}
$$
$$
A = -
\pi^{1/2} 4^{-a-b+3/2}\frac{\Gamma(2a)\Gamma(1-2a) \Gamma(2b)\Gamma(1-2b)}{\Gamma\left(\frac{1}{2} + a+b\right)\Gamma\left(\frac{1}{2}-a-b\right)} \frac{\Gamma \left(-a-b+\frac{3}{2}\right) \Gamma (a+b-1)}{\Gamma (2-2 a) \Gamma (a)^2 \Gamma (2-2 b) \Gamma (b)^2}
$$
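As a sanity check that the two forms of $A$ really agree, here is a quick stdlib-Python comparison (the test point $(a,b)=(0.3,0.45)$ is arbitrary, chosen to avoid the poles of the cosecants and gammas):

```python
from math import gamma, sin, cos, pi

def csc(x):
    return 1.0 / sin(x)

def A_original(a, b):
    """The closed form as first given."""
    return -(pi**1.5 * 2**(-2*a - 2*b + 3) * csc(2*pi*a) * csc(2*pi*b)
             * cos(pi*(a + b)) * gamma(1.5 - a - b) * gamma(a + b - 1)
             / (gamma(2 - 2*a) * gamma(a)**2 * gamma(2 - 2*b) * gamma(b)**2))

def A_gamma_form(a, b):
    """Same quantity after pi csc(pi s) = Gamma(s)Gamma(1-s) and
    cos(pi s/2) = pi / (Gamma(1/2 + s/2) Gamma(1/2 - s/2))."""
    return -(pi**0.5 * 4**(-a - b + 1.5)
             * gamma(2*a) * gamma(1 - 2*a) * gamma(2*b) * gamma(1 - 2*b)
             / (gamma(0.5 + a + b) * gamma(0.5 - a - b))
             * gamma(1.5 - a - b) * gamma(a + b - 1)
             / (gamma(2 - 2*a) * gamma(a)**2 * gamma(2 - 2*b) * gamma(b)**2))

a, b = 0.3, 0.45
print(A_original(a, b), A_gamma_form(a, b))
```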
As a base observation
$$
\int_0^\infty \int_0^\infty x_1^{s_1-1} x_2^{s_2-1} \;_2F_1(a,b;c;-x_1)\;_2F_1(a,b;c;-x_2) \; dx_1 dx_2 = \frac{\Gamma (c)^2 \Gamma (s_1) \Gamma (s_2) \Gamma (a-s_1) \Gamma (a-s_2) \Gamma (b-s_1) \Gamma (b-s_2)}{\Gamma (a)^2 \Gamma (b)^2 \Gamma (c-s_1) \Gamma (c-s_2)}
$$
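Since this factorises, one cheap sanity check is the special case $c=b$, where $_2F_1(a,b;b;-x)=(1+x)^{-a}$ and each one-dimensional factor should collapse to the Beta integral $\Gamma(s)\Gamma(a-s)/\Gamma(a)$. A stdlib-Python check (test values $a=4$, $s=3/2$ are arbitrary):

```python
from math import gamma, sin, cos, pi

# With c = b: integral_0^inf x^{s-1} (1+x)^{-a} dx = Gamma(s)Gamma(a-s)/Gamma(a).
a, s = 4.0, 1.5

# substitute x = t/(1-t), then t = sin(u)^2 to smooth the endpoints:
# the integral becomes 2 * int_0^{pi/2} sin(u)^{2s-1} cos(u)^{2(a-s)-1} du
N = 2000
h = (pi / 2) / N
total = 0.0
for i in range(N):
    u = (i + 0.5) * h          # midpoint rule avoids the endpoints
    total += 2 * sin(u)**(2*s - 1) * cos(u)**(2*(a - s) - 1) * h

closed = gamma(s) * gamma(a - s) / gamma(a)   # = pi/16 for these values
print(total, closed)
```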
I feel there might be a parameterisation with $s_1 =a+b-1$ and $s_2=3/2-a-b$, therefore $\Gamma(1/2+a+b) = \Gamma(3/2+s_1)$ and $\Gamma(1/2-a-b)=\Gamma(s_2-1)$. But I have fried my brain for the time being...
I think I can show you another way to prove the following claim:
Claim:
$$\mathbb{E}\left(\frac{X}{\|X\|}\right)=\frac{\Gamma(\frac{n+1}{2})}{(2\sigma^2)^{1/2}\Gamma(\frac{n+2}{2})}{}_1F_1\bigg(\frac{1}{2};\frac{n+2}{2};-\frac{\|\mu\|^2}{2\sigma^2}\bigg)\mu$$
Proof:
Using River Li's answer, we have
$$\begin{align}
\mathbb{E}\left(\frac{X}{\|X\|}\right)
&= \mathbb{E}\left(\frac{X}{(\|X\|^2)^{1/2}}\right)
\\\\&=\mathbb{E}\left(X \cdot \frac{1}{\Gamma(1/2)}\int_0^\infty \mathrm{e}^{-z\|X\|^2}z^{-1/2}\,dz \right)
\\\\&= \frac{1}{\sqrt{\pi}}\int_0^\infty \mathbb{E}\left(X\mathrm{e}^{-z\|X\|^2}\right)z^{-1/2}\,dz
\\\\&= \mu \cdot \frac{1}{\sqrt{\pi}}\int_0^\infty \frac{e^{-\frac{\|\mu\|^2 z}{2\sigma^2 z + 1}} \cdot z^{-1/2}}{(2\sigma^2 z + 1)^{(n+2)/2}}dz\end{align}$$
Letting $t=\frac{1}{2\sigma^2 z + 1}$, we have $\frac{dt}{dz}=-2\sigma^2 t^2$, so we obtain
$$\mathbb{E}\left(\frac{X}{\|X\|}\right)=\mu\cdot \frac{e^{-\frac{\|\mu\|^2}{2\sigma^2}}}{(2\sigma^2)^{1/2} \sqrt{\pi}}\int_0^1 e^{\frac{\|\mu\|^2t}{2\sigma^2}}t^{\frac{n-1}2}(1-t)^{-1/2}dt$$
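As a quick numerical check of this substitution step, here is a stdlib-Python comparison of the two scalar integrals (the values of $\sigma^2$, $\|\mu\|^2$, $n$ are arbitrary test points; both integrals are desingularised by further substitutions $z=v^2$ and $1-t=w^2$):

```python
from math import exp, sqrt, pi

sigma2, mu2, n = 0.7, 1.3, 3        # sigma^2, ||mu||^2, dimension (test values)
m = mu2 / (2 * sigma2)

def midpoint(f, lo, hi, N=200000):
    """Composite midpoint rule, avoids evaluating at the endpoints."""
    h = (hi - lo) / N
    return sum(f(lo + (i + 0.5) * h) for i in range(N)) * h

# before: (1/sqrt(pi)) int_0^inf e^{-mu2 z/(2s2 z+1)} z^{-1/2} (2s2 z+1)^{-(n+2)/2} dz,
# with z = v^2 removing the z^{-1/2} singularity (truncated at v = 60)
lhs = (1 / sqrt(pi)) * midpoint(
    lambda v: 2 * exp(-mu2 * v**2 / (2*sigma2*v**2 + 1))
              * (2*sigma2*v**2 + 1)**(-(n + 2) / 2), 0.0, 60.0)

# after: e^{-m}/((2s2)^{1/2} sqrt(pi)) int_0^1 e^{mt} t^{(n-1)/2} (1-t)^{-1/2} dt,
# with 1-t = w^2 removing the (1-t)^{-1/2} singularity
rhs = (exp(-m) / (sqrt(2 * sigma2) * sqrt(pi))) * midpoint(
    lambda w: 2 * exp(m * (1 - w**2)) * (1 - w**2)**((n - 1) / 2), 0.0, 1.0)

print(lhs, rhs)
```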
According to here, we can say that if $\text{Re}(b)\gt \text{Re}(a)\gt 0$, then we have
$$\int_0^1e^{zt}t^{a-1}(1-t)^{-a+b-1}dt=\Gamma(a)\Gamma(b-a)\ _{1}\bar{F}_1(a;b;z)$$
So, taking $a=\frac{n+1}{2},b=\frac{n+2}{2}$ and $z=\frac{\|\mu\|^2}{2\sigma^2}$, we have
$$\mathbb{E}\left(\frac{X}{\|X\|}\right)=\mu\cdot\frac{e^{-\frac{\|\mu\|^2}{2\sigma^2}}\Gamma(\frac{n+1}{2})}{(2\sigma^2)^{1/2}}{}_1\bar{F}_1\bigg(\frac{n+1}{2};\frac{n+2}{2};\frac{\|\mu\|^2}{2\sigma^2}\bigg)$$
Since $_{1}\bar{F}_{1}(a;b;z)=\frac{1}{\Gamma(b)}\ _{1}F_{1}(a;b;z)$, we have
$$\mathbb{E}\left(\frac{X}{\|X\|}\right)= \mu\cdot\frac{e^{-\frac{\|\mu\|^2}{2\sigma^2}}\Gamma(\frac{n+1}{2})}{(2\sigma^2)^{1/2}\Gamma(\frac{n+2}{2})}{}_1F_1\bigg(\frac{n+1}{2};\frac{n+2}{2};\frac{\|\mu\|^2}{2\sigma^2}\bigg)$$
Since $_{1}F_1(a;b;z)=e^z\ _{1}F_1(b-a;b;-z)$, we finally get
$$\mathbb{E}\left(\frac{X}{\|X\|}\right)=\frac{\Gamma(\frac{n+1}{2})}{(2\sigma^2)^{1/2}\Gamma(\frac{n+2}{2})}{}_1F_1\bigg(\frac{1}{2};\frac{n+2}{2};-\frac{\|\mu\|^2}{2\sigma^2}\bigg)\mu.\ \blacksquare$$
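The Kummer transformation used in the last step is easy to check numerically from the defining series; a stdlib-Python sketch (the values $n=3$ and $z=0.9$ are arbitrary test points):

```python
from math import exp

def hyp1f1(a, b, z, K=80):
    """Truncated series for the Kummer function 1F1(a;b;z)."""
    term, total = 1.0, 1.0
    for k in range(K):
        term *= (a + k) / ((b + k) * (k + 1)) * z
        total += term
    return total

# Kummer's transformation 1F1(a;b;z) = e^z 1F1(b-a;b;-z),
# at the parameters used in the final step (with b - a = 1/2)
n, z = 3, 0.9
a, b = (n + 1) / 2, (n + 2) / 2
lhs = hyp1f1(a, b, z)
rhs = exp(z) * hyp1f1(b - a, b, -z)
print(lhs, rhs)
```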
Added :
There is an error in step 2, since it is not true that
$$\int_{1}^{-1} z \left(-e^{a z}\right) \left(1-z^2\right)^{\frac{n}{2}-\frac{1}{2}} dz = \frac{1}{2} \sqrt{\pi }\ a\ \Gamma\left(\frac{n+1}{2}\right) {}_0F_1\left(\frac{n}{2}+2;\frac{a^2}{4}\right)$$
According to WolframAlpha, for $(a,n)=(2,1)$, we have
$$LHS=\frac{e^4+3}{4e^2}\not=\frac{3\sqrt{\pi}}{4}\times\frac{e^4+3}{4e^2}=RHS$$
My conjecture is
$$\int_{1}^{-1} z \left(-e^{a z}\right) \left(1-z^2\right)^{\frac{n}{2}-\frac{1}{2}} dz = \frac{1}{2}\sqrt{\pi}\ a\ \frac{\Gamma\left(\frac{n+1}{2}\right)}{\color{red}{\Gamma\left(\frac{n+4}{2}\right)}}\ {}_0F_1\left(\frac{n}{2}+2;\frac{a^2}{4}\right)\tag1$$
According to WolframAlpha, for small $n$, $(1)$ holds.
However, I don't know how to prove that the conjecture is true.
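For what it's worth, here is a stdlib-Python numerical check of $(1)$ for a few small $n$ (the value $a=2$ is an arbitrary test point; the integral is computed with $z=\sin\varphi$ to smooth the endpoints):

```python
from math import exp, gamma, sin, cos, pi, sqrt

def hyp0f1(b, z, K=60):
    """Truncated series for 0F1(;b;z)."""
    term, total = 1.0, 1.0
    for k in range(K):
        term *= z / ((b + k) * (k + 1))
        total += term
    return total

def lhs(a, n, N=20000):
    """int_1^{-1} z (-e^{az}) (1-z^2)^{(n-1)/2} dz
       = int_{-1}^{1} z e^{az} (1-z^2)^{(n-1)/2} dz,
    via z = sin(phi), midpoint rule."""
    h = pi / N
    total = 0.0
    for i in range(N):
        phi = -pi/2 + (i + 0.5) * h
        total += sin(phi) * exp(a * sin(phi)) * cos(phi)**n * h
    return total

def rhs(a, n):
    """Conjectured closed form (1), with the extra Gamma((n+4)/2)."""
    return (0.5 * sqrt(pi) * a * gamma((n + 1) / 2) / gamma((n + 4) / 2)
            * hyp0f1(n / 2 + 2, a * a / 4))

for n in (1, 2, 3):
    print(n, lhs(2.0, n), rhs(2.0, n))
```

For $(a,n)=(2,1)$ both sides come out at $\frac{e^4+3}{4e^2}$, matching the WolframAlpha value above.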
Best Answer
Define the function $\mathcal{I}:\mathbb{R}_{>0}\rightarrow\mathbb{R}$ via the definite integral
$$\mathcal{I}{\left(a\right)}:=\int_{0}^{\operatorname{arsinh}{\left(a\right)}}\mathrm{d}\eta\,\frac{1}{\left[\sinh{\left(\eta\right)}\right]^{2/3}}.$$
You're probably going to kick yourself when you realize what the first substitution should be, but sometimes the most obvious paths are the most overlooked.
Given $a\in\mathbb{R}_{>0}$, we obtain
$$\begin{align} \mathcal{I}{\left(a\right)} &=\int_{0}^{\operatorname{arsinh}{\left(a\right)}}\mathrm{d}\eta\,\frac{1}{\left[\sinh{\left(\eta\right)}\right]^{2/3}}\\ &=\int_{0}^{a}\mathrm{d}x\,\frac{1}{x^{2/3}\sqrt{1+x^{2}}};~~~\small{\left[\eta=\operatorname{arsinh}{\left(x\right)}\right]}\\ &=\int_{0}^{a^{2}}\mathrm{d}y\,\frac{1}{2y^{5/6}\sqrt{1+y}};~~~\small{\left[x=\sqrt{y}\right]}\\ &=\int_{0}^{1}\mathrm{d}t\,\frac{a^{2}}{2a^{5/3}t^{5/6}\sqrt{1+a^{2}t}};~~~\small{\left[y=a^{2}t\right]}\\ &=\frac{a^{1/3}}{2}\int_{0}^{1}\mathrm{d}t\,\frac{1}{t^{5/6}\sqrt{1+a^{2}t}}.\\ \end{align}$$
You should now have no problem translating this into the hypergeometric form using the first integral representation you listed, so I leave you to take it from here.
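If you want to convince yourself the chain of substitutions is right before going hypergeometric, here is a stdlib-Python check of the first and last lines (the value $a=2$ is an arbitrary test point; further substitutions $\eta=v^3$ and $t=u^6$ tame the endpoint singularities):

```python
from math import sinh, asinh, sqrt

def midpoint(f, lo, hi, N=20000):
    """Composite midpoint rule, avoids evaluating at the endpoints."""
    h = (hi - lo) / N
    return sum(f(lo + (i + 0.5) * h) for i in range(N)) * h

a = 2.0

# original: int_0^{arsinh(a)} sinh(eta)^{-2/3} d eta, with eta = v^3 so the
# integrand 3 v^2 sinh(v^3)^{-2/3} stays bounded (-> 3) as v -> 0
V = asinh(a) ** (1 / 3)
orig = midpoint(lambda v: 3 * v**2 / sinh(v**3)**(2 / 3), 0.0, V)

# final form: (a^{1/3}/2) int_0^1 t^{-5/6} (1+a^2 t)^{-1/2} dt, with t = u^6
# so the integrand becomes 6 (1+a^2 u^6)^{-1/2}, smooth on [0,1]
final = (a**(1 / 3) / 2) * midpoint(lambda u: 6 / sqrt(1 + a*a * u**6), 0.0, 1.0)

print(orig, final)
```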