You are right. The integral
$$\Gamma(x) = \int_0^\infty s^{x - 1} e^{-s}\,ds,\qquad x\in\mathbb{R}\tag{1}$$
converges only if $x>0$. Therefore the definition only works for positive $x$. However, one can use the functional equation
$$\Gamma(x+1)=x\Gamma(x)$$
to give a meaning to $\Gamma(x)$ also when $x$ is negative. Namely, suppose $-1<x<0$. Then $x+1>0$, so $\Gamma(x+1)$ is defined by $(1)$. Now we define $\Gamma(x)$ (for which the formula $(1)$ doesn't make sense) as
$$\Gamma(x):=\frac{\Gamma(x+1)}{x}\tag{2}$$
You can already see that this definition makes no sense for $x=0$, because of the division by $x$.
In the same way we can define $\Gamma(x)$ for all $x\in\mathbb{R}$ except $x=0,-1,-2,\dots$.
Namely, if $-2<x<-1$, then $-1<x+1<0$, so we already know what $\Gamma(x+1)$ is by $(2)$, hence
$$\Gamma(x):=\frac{\Gamma(x+1)}{x}=\frac{\Gamma(x+2)}{x(x+1)}$$
In general, if $x>-n$ (and $x\neq 0,-1,-2,\dots$), then
$$\Gamma(x):=\frac{\Gamma(x+n)}{x(x+1)\cdots(x+n-1)}$$
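This recursion is easy to put into code. Here is a small sketch in Python, using the standard-library `math.gamma` as a stand-in for the integral $(1)$ on positive arguments; the name `gamma_extended` is chosen here for illustration, not standard:

```python
import math

def gamma_extended(x: float) -> float:
    """Gamma(x), extended to negative non-integer x via
    Gamma(x) = Gamma(x + n) / (x (x+1) ... (x+n-1))."""
    if x <= 0 and x == int(x):
        raise ValueError("Gamma is undefined at 0, -1, -2, ...")
    denom = 1.0
    while x <= 0:           # shift x upward until formula (1) applies
        denom *= x
        x += 1.0
    return math.gamma(x) / denom

print(gamma_extended(-0.5))  # -2 * Gamma(1/2) = -2 * sqrt(pi) ≈ -3.5449
```

Note that `math.gamma` itself already handles negative non-integer arguments, so it can serve as an independent check of the recursion.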
So you were interested in $\Gamma(-1/2)$. We get
$$\Gamma(-1/2)=-2\Gamma(1/2)$$
which you can now calculate using $(1)$.
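For completeness, the value of $\Gamma(1/2)$ follows from $(1)$ with the substitution $s=u^2$, $ds = 2u\,du$, which reduces it to the Gaussian integral:

$$\Gamma(1/2)=\int_0^\infty s^{-1/2}e^{-s}\,ds = 2\int_0^\infty e^{-u^2}\,du = \sqrt{\pi},$$

so $\Gamma(-1/2) = -2\sqrt{\pi}$.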
I think this succumbs to Dirichlet's test together with some estimates based on the mean value theorem.
Dirichlet's test gives convergence of a sum $\sum_{n \geq 1} a_n b_n$ provided that $(a_n)$ is a sequence of positive numbers decreasing to $0$ and the partial sums $B_n = \sum_{k=1}^n b_k$ are bounded in absolute value, uniformly in $n$. In this case, take $a_n = n^{-\sigma}$ and $b_n = (-1)^{n+1} n^{-it}$. Clearly the $a_n$ decrease to $0$ if $\sigma > 0$, so we just need the uniform boundedness of the $B_n$.
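As a numerical sanity check (not part of the proof), one can compute the partial sums $B_n$ for a sample value such as $t=3$ and observe that they stay bounded; the variable names below are arbitrary:

```python
import cmath

# Partial sums B_n = sum_{k=1}^n (-1)^(k+1) * k^(-it) for t = 3.
# The claim is that |B_n| stays bounded uniformly in n.
t = 3.0
B = 0j
max_abs_B = 0.0
for n in range(1, 10001):
    # k^{-it} = exp(-i t log k)
    B += (-1) ** (n + 1) * cmath.exp(-1j * t * cmath.log(n))
    max_abs_B = max(max_abs_B, abs(B))

print(max_abs_B)  # stays modest over this whole range
```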
It suffices to bound $\sum_{k=1}^n \left[(2k-1)^{-it} - (2k)^{-it}\right]$, since an odd partial sum differs from an even one by a single term of modulus $1$. Clearly this is bounded for $t=0$, so assume $t\neq 0$. The real and imaginary parts are
$$C_n = \sum_{k=1}^n \cos(t\log(2k-1)) - \cos(t\log(2k)),$$
$$S_n = \sum_{k=1}^n \sin(t\log(2k)) - \sin(t\log(2k-1)).$$
Let's show how to bound the imaginary part $S_n$; the real part is bounded by a wholly analogous technique.
By the mean value theorem, the $k$-th term of $S_n$ is
$$\frac{t\cos(t\log(x))}{x}$$
for some $x \in [2k-1, 2k]$. This is not much different from $\frac{t\cos(t\log(2k-1))}{2k-1}$; in fact, the difference between them is, again by the mean value theorem,
$$-\frac{t^2 \sin(t\log(y)) + t\cos(t\log(y))} {y^2} (x - (2k - 1))$$
for some $y \in [2k-1, x]$; since $x - (2k-1) \leq 1$, its absolute value is bounded above by $\frac{t^2+|t|}{(2k-1)^2}$. Since $\sum_{k\geq 1} \frac1{(2k-1)^2}$ converges, this reduces us to putting a uniform bound on the absolute value of
$$\sum_{k=1}^n \frac{\cos(t\log(2k-1))}{2k-1} = \text{Re}\sum_{k=1}^n \frac1{(2k-1)^{1 + it}}.$$
Put $z = 1 + it$; we may as well bound the absolute value of
$$\sum_{k=1}^n \frac1{(2k-1)^z} = \sum_{k=1}^n \int_k^{k+1} \left[\frac1{(2k-1)^z} - \frac1{(2x-1)^z}\right] dx + \int_1^{n+1} \frac1{(2x-1)^z}\, dx.$$
The last integral is easily evaluated as $\left[\frac{(2x-1)^{-it}}{-2it}\right]_1^{n+1}$, which in absolute value is bounded above by $\frac1{|t|}$. We turn now to the preceding sum of integrals. The integrands are bounded thus:
$$\left|(2k-1)^{-z} - (2x-1)^{-z}\right| = \left|\int_{2k-1}^{2x-1} \frac{z}{u^{z+1}}\; du\right| \leq \frac{2|z|}{(2k-1)^{\text{Re}(z) + 1}},$$
using $2x - 1 - (2k-1) \leq 2$.
Thus the sum of the integrals is bounded by the finite quantity $2|1+it| \sum_{k=1}^\infty \frac1{(2k-1)^2}$, and we are done.
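As a numerical illustration (not needed for the proof), one can check for a sample value of $t$ that the partial sums of $\sum_k (2k-1)^{-z}$ indeed stay within the explicit bound $\frac1{|t|} + 2|z|\sum_{k\geq 1}\frac1{(2k-1)^2}$ assembled above:

```python
import cmath
import math

t = 3.0
z = 1.0 + 1j * t

# Bound from the argument above: 1/|t| for the explicit last integral,
# plus 2|z| * sum 1/(2k-1)^2 = 2|z| * pi^2/8 for the sum of integrals.
bound = 1.0 / abs(t) + 2.0 * abs(z) * (math.pi ** 2 / 8.0)

partial = 0j
max_abs = 0.0
for k in range(1, 5001):
    partial += cmath.exp(-z * math.log(2 * k - 1))  # (2k-1)^{-z}
    max_abs = max(max_abs, abs(partial))

print(max_abs, bound)  # max_abs should never exceed bound
```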
Since $\frac z n \rightarrow 0$ as $n \rightarrow+\infty$,
$$\log\left(1+\frac z n\right) =\frac z n +\mathcal O\left( \frac 1 {n^2}\right).$$
Likewise,
$$z\log\left(1+\frac 1 n\right) =\frac z n +\mathcal O\left( \frac 1 {n^2}\right).$$
Subtracting,
$$z\log\left(1+\frac 1n\right)-\log\left(1+\frac z n\right) =\mathcal O\left( \frac 1 {n^2}\right),$$
and therefore the sum converges for all $z$ for which its terms are defined, that is, whenever $z$ is not a negative integer.
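As a quick numeric sanity check: by Euler's product for the Gamma function (a known identity, not claimed in the answer above), the full sum equals $\log\Gamma(z+1)$ for $z>0$, so the partial sums can be compared against `math.lgamma`:

```python
import math

z = 0.5
N = 200_000

# Partial sum of the series; its terms are O(1/n^2) by the expansion above,
# so the tail after N contributes only O(1/N).
total = sum(z * math.log1p(1.0 / n) - math.log1p(z / n)
            for n in range(1, N + 1))

# By Euler's product, the full sum is log(z * Gamma(z)) = log Gamma(z + 1).
print(total, math.lgamma(z + 1))
```

The two printed values agree to several decimal places, consistent with the $\mathcal O(1/n^2)$ term estimate.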