3.471.9
I will start with 3.471.9, since it is much simpler. The key ingredient is Slater's theorem, which states that
$$
\int_0^\infty t^{\alpha-1} f(t) g\left(\frac{x}{t}\right) \mathrm{d} t = \frac{1}{2 \pi i} \int_{\gamma - i \infty}^{\gamma + i \infty} \mathcal{M}_f(s+\alpha) \mathcal{M}_g(s) x^{-s} \mathrm{d} s = \mathcal{M}^{-1} \left\{ \mathcal{M}_f(s+\alpha) \mathcal{M}_g(s) \right\}(x)
$$
where $\mathcal{M}_f(s) = \int_0^\infty t^{s-1} f(t) \mathrm{d} t$ is the Mellin transform of $f$, valid within some strip $a < \Re(s) < b$, and $\gamma$ is a real constant chosen so that $\gamma$ lies within the strip of validity of the Mellin transform of $g$, and $\gamma + \Re(\alpha)$ lies within the strip of the Mellin transform of $f$.
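As an illustrative aside (not part of the original argument), SymPy's `mellin_transform` returns a transform together with its strip of convergence, which makes the role of the strips above concrete:

```python
from sympy import symbols, exp, mellin_transform, gamma, oo

t, s = symbols('t s')

# Mellin transform of exp(-t): SymPy returns the transform, the strip
# of convergence (a, b) meaning a < Re(s) < b, and extra conditions.
F, strip, cond = mellin_transform(exp(-t), t, s)
# F == gamma(s), strip == (0, oo): valid for Re(s) > 0
```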
The proof is easy:
$$ \begin{eqnarray}
\int_0^\infty t^{\alpha-1} f(t) g\left(\frac{x}{t}\right) \mathrm{d} t &=& \int_0^\infty t^{\alpha-1} f(t) \left( \frac{1}{2 \pi i} \int_{\gamma - i \infty}^{\gamma + i \infty} \mathcal{M}_g(s) x^{-s} t^s \mathrm{d} s \right) \mathrm{d} t \\ &=& \frac{1}{2 \pi i} \int_{\gamma - i \infty}^{\gamma + i \infty} \left( \int_0^\infty t^{s+\alpha-1} f(t) \mathrm{d} t \right) \mathcal{M}_g(s) x^{-s} \mathrm{d} s \\
&=& \frac{1}{2 \pi i} \int_{\gamma - i \infty}^{\gamma + i \infty} \mathcal{M}_f(s+\alpha) \mathcal{M}_g(s) x^{-s} \mathrm{d} s
\end{eqnarray}
$$
We now apply this theorem to 3.471.9, using the Cahen–Mellin integral $\mathcal{M}_{\exp(-\bullet)}(s) = \Gamma(s)$, valid for $\Re(s) > 0$:
$$\begin{eqnarray}
\int_0^\infty t^{\alpha-1} \exp\left(-y t\right) \exp\left( - \frac{x}{t}\right) \mathrm{d} t &=& \frac{1}{2 \pi i} \int_{\gamma - i \infty}^{\gamma + i \infty} \Gamma(s+\alpha) \Gamma(s) y^{-\alpha -s} x^{-s} \mathrm{d}s \\ &=& y^{-\alpha} \left( \frac{1}{2 \pi i} \int_{\gamma - i \infty}^{\gamma + i \infty} \Gamma(s+\alpha) \Gamma(s) \left( x y \right)^{-s} \mathrm{d}s \right) \\ &=& y^{-\alpha} \left( 2 (x y)^{\alpha/2} K_{\alpha}(2 \sqrt{x y} ) \right) \\
&=& 2 \left( \frac{x}{y} \right)^{\alpha/2} K_\alpha\left(2 \sqrt{x y}\right)
\end{eqnarray}
$$
where DLMF 10.43.19 was used.
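As a numerical sanity check of 3.471.9 (the parameter values below are arbitrary choices of mine, not from the text), SciPy's quadrature can compare both sides:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

# Arbitrary sample parameters for the check
alpha, x, y = 1.5, 2.0, 3.0

# Left-hand side: direct quadrature of the integral in 3.471.9
lhs, _ = quad(lambda t: t**(alpha - 1) * np.exp(-y * t - x / t), 0, np.inf)

# Right-hand side: the closed form with the modified Bessel function K
rhs = 2 * (x / y)**(alpha / 2) * kv(alpha, 2 * np.sqrt(x * y))
```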
6.635.3
A kindred formula is worked out in Erdélyi et al. (Bateman Manuscript Project), "Higher Transcendental Functions", chapter 7, section 7.7.6 on Macdonald's and Nicholson's formulas, formula (37). It says:
$$
\int_0^\infty \exp\left(-\frac{t}{2} - \frac{x^2+X^2}{2 t} \right) I_\nu \left( \frac{x X}{t} \right) \frac{\mathrm{d} t}{t} = \begin{cases} 2 I_\nu\left( x \right) K_\nu\left( X \right), & x < X \\ 2 I_\nu\left( X \right) K_\nu\left( x \right), & x > X \end{cases}
$$
By performing the analytic continuation $x \to \mathrm{e}^{i \pi/2} x$ of the first branch (the only one for which this is possible), we get
$$
\int_0^\infty \exp\left(-\frac{t}{2} - \frac{X^2-x^2}{2 t} \right) J_\nu \left( \frac{x X}{t} \right) \frac{\mathrm{d} t}{t} = 2 J_\nu(x) K_\nu(X)
$$
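This continued formula can also be checked numerically for $x < X$ (the sample values of $\nu$, $x$, $X$ below are mine, chosen only for the check):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, kv

# Arbitrary sample parameters with x < X, so that (X^2 - x^2) > 0
nu, x, X = 1.0, 1.0, 2.0

# Left-hand side: the integral with J_nu after analytic continuation
integrand = lambda t: np.exp(-t / 2 - (X**2 - x**2) / (2 * t)) * jv(nu, x * X / t) / t
lhs, _ = quad(integrand, 0, np.inf, limit=200)

# Right-hand side: 2 J_nu(x) K_nu(X)
rhs = 2 * jv(nu, x) * kv(nu, X)
```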
Now, performing the reparameterization $X = \sqrt{2 \alpha \left( \sqrt{\beta^2 + \gamma^2}+ \beta \right)}$, $x = \sqrt{2 \alpha \left( \sqrt{\beta^2 + \gamma^2} - \beta \right)}$, accompanied by the change of variables $t \to \left(2 \alpha t\right)^{-1}$, we arrive at 6.635.3.
The derivation of the point-of-departure formula relies on two auxiliary results. The first is
$$
\int_0^\infty J_{\nu}(x u) J_{\nu}(X u) \exp\left(-\frac{t u^2}{2}\right) u \mathrm{d} u = \frac{1}{t} \exp\left(-\frac{x^2+X^2}{2t}\right) I_\nu\left( \frac{x X}{t}\right)
$$
which is obtained by expanding the exponential in a series and integrating term by term. Then, using the above integral representation in the formula under consideration and carrying out a simple integration with respect to $t$, we get
$$
\int_0^\infty \frac{\mathrm{e}^{-t/2}}{t} \exp\left(-\frac{x^2+X^2}{2t}\right) I_\nu\left( \frac{x X}{t}\right) \mathrm{d} t =
\int_0^\infty J_\nu(x u) J_\nu(X u) \frac{2 u \mathrm{d} u}{1+u^2}
$$
The resulting integral is of Sonine–Gegenbauer type and equals $2 I_\nu( \min(x,X)) K_\nu(\max(x,X))$.
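The left-hand side of the last identity converges rapidly and can be checked numerically against this closed form (the sample parameters are mine; the exponentially scaled Bessel function `ive` is used to avoid overflow for small $t$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ive, iv, kv

# Arbitrary sample parameters with x < X
nu, x, X = 0.5, 1.0, 2.0

# exp(-(x^2+X^2)/(2t)) * I_nu(xX/t) == exp(-(X-x)^2/(2t)) * ive(nu, xX/t),
# since ive(nu, z) = iv(nu, z) * exp(-z) for z > 0; this avoids overflow.
integrand = lambda t: np.exp(-t / 2 - (X - x)**2 / (2 * t)) / t * ive(nu, x * X / t)
lhs, _ = quad(integrand, 0, np.inf, limit=200)

rhs = 2 * iv(nu, min(x, X)) * kv(nu, max(x, X))
```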
Yes. If we start from
\begin{equation}
\frac{d^2y}{dx^2} + \frac{1-2\alpha}{x} \frac{dy}{dx} - \beta^2 y = 0,
\end{equation}
we write $\beta^{\alpha} y = \xi^{\alpha} z(\xi)$ with a new independent variable $\xi = \beta x$ and with a new dependent variable $z$. Computing the first two derivatives of $y$ by the product and chain rules yields
\begin{eqnarray}
\beta^{\alpha} \frac{dy}{dx} &=& \alpha \xi^{\alpha-1} \beta z + \xi^{\alpha} \beta \frac{dz}{d\xi},\\
\beta^{\alpha} \frac{d^2y}{dx^2} &=& \alpha(\alpha-1) \xi^{\alpha-2} \beta^2 z + 2 \alpha \xi^{\alpha-1} \beta^2 \frac{dz}{d\xi} + \xi^{\alpha} \beta^2 \frac{d^2z}{d\xi^2}.
\end{eqnarray}
We now obtain
\begin{eqnarray}
0 &=& \beta^{\alpha} \left( \frac{d^2y}{dx^2} + \frac{1-2\alpha}{x} \frac{dy}{dx} - \beta^2 y \right) = \beta^{\alpha} \frac{d^2y}{dx^2} + \frac{1-2\alpha}{\xi} \beta \beta^{\alpha} \frac{dy}{dx} - \beta^2 \beta^{\alpha} y\\
&=& \dots = \xi^{\alpha-2} \beta^2 \left( \xi^2 \frac{d^2z}{d\xi^2} + \xi \frac{dz}{d\xi} - (\xi^2 + \alpha^2) z \right).
\end{eqnarray}
So now we have the modified Bessel's equation $\xi^2 \frac{d^2z}{d\xi^2} + \xi \frac{dz}{d\xi} - (\xi^2 + \alpha^2) z = 0$ for $z$, and a fundamental system of solutions is given by $\{I_{\alpha}(\xi),K_{\alpha}(\xi)\}$ for any value of $\alpha$, or by $\{I_{\alpha}(\xi),I_{-\alpha}(\xi)\}$, if $\alpha \not \in \mathbb{Z}$.
Transforming back we obtain $y(x) = \beta^{-\alpha}(\beta x)^{\alpha} z(\beta x) = x^{\alpha} z(\beta x)$.
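A quick numerical check that the transformed solution really solves the original equation, using mpmath's numerical differentiation (the parameter values are my own test choices):

```python
import mpmath as mp

mp.mp.dps = 30  # extra precision for numerical differentiation

# Arbitrary test parameters; y(x) = x^alpha * K_alpha(beta * x) should
# satisfy y'' + (1 - 2*alpha)/x * y' - beta^2 * y = 0.
alpha, beta = mp.mpf('0.7'), mp.mpf('1.3')
y = lambda x: x**alpha * mp.besselk(alpha, beta * x)

x0 = mp.mpf('2.0')
residual = (mp.diff(y, x0, 2)
            + (1 - 2 * alpha) / x0 * mp.diff(y, x0)
            - beta**2 * y(x0))
# residual should vanish to within the differentiation accuracy
```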
The only thing that I don't like about the Wolfram notation is that it might be misinterpreted as saying that $\{I_{\alpha}(\xi),K_{\alpha}(\xi)\}$ is a fundamental system of solutions only if $\alpha \in \mathbb{Z}$, which is not true.
Best Answer
You are right. The integral
$$\Gamma \left( x \right) = \int\limits_0^\infty {s^{x - 1} e^{ - s} ds},\,\,x\in\mathbb{R}\tag{1}$$
converges only if $x>0$. Therefore the definition only works for positive $x$. However, one can use the functional equation $$\Gamma(x+1)=x\Gamma(x)$$
to give a meaning to $\Gamma(x)$ also when $x$ is negative. Namely, suppose $-1<x<0$. Then $x+1>0$, so $\Gamma(x+1)$ is defined by $(1)$. Now we define $\Gamma(x)$ (for which the formula $(1)$ doesn't make sense) as
$$\Gamma(x):=\frac{\Gamma(x+1)}{x}\tag{2}$$
You can already see that this definition makes no sense for $x=0$. In the same way we can define $\Gamma(x)$ for all $x\in\mathbb{R}$ (except $0,-1,-2,\dots$). Namely, if $-2<x<-1$, then $-1<x+1<0$, so we already know what $\Gamma(x+1)$ is by $(2)$, hence
$$\Gamma(x):=\frac{\Gamma(x+1)}{x}=\frac{\Gamma(x+2)}{x(x+1)}$$
In general, if $x>-n$ and $x \notin \{0,-1,-2,\dots\}$, then
$$\Gamma(x):=\frac{\Gamma(x+n)}{x(x+1)\cdots(x+n-1)}$$
So you were interested in $\Gamma(-1/2)$. We get
$$\Gamma(-1/2)=-2\Gamma(1/2)$$
which you can now calculate using $(1)$.
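The recursion can be checked against the library Gamma function, which implements exactly this continuation:

```python
import math

# Gamma(1/2) = sqrt(pi) from the integral definition, and
# Gamma(-1/2) = Gamma(1/2) / (-1/2) = -2 * sqrt(pi) by the recursion.
g_half = math.gamma(0.5)          # sqrt(pi) ~ 1.7724...
g_minus_half = g_half / (-0.5)    # -2 * sqrt(pi) ~ -3.5449...
```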