You may be interested in the paper Locating the zeros of partial sums of $\exp(z)$ with Riemann-Hilbert Methods by T. Kriecherbauer, A.B.J. Kuijlaars, K.D.T-R McLaughlin, and P.D. Miller (arXiv preprint available here). In section 4 they give asymptotic series for the zeros in terms of the images of the roots of unity through the map $z \mapsto -W(-z/e)$.
I'm not familiar with their methods, but I do know of another way to find asymptotic approximations for the zeros of $s_n(nz)$ which stay away from the point $z=1$ (that is, which remain in a compact subset of the punctured plane $\mathbb{C} \setminus \{1\}$ as $n \to \infty$).
The zeros of $s_n(nz)$ satisfy the asymptotic equation
$$
\left(ze^{1-z}\right)^n = \sqrt{2\pi n} \frac{1-z}{z} \Bigl(1+\epsilon_n(z)\Bigr),
\tag{1}
$$
where $\epsilon_n(z) = O(1/n)$ as long as $z$ remains in a compact subset of the half-plane $\operatorname{Re}(z) < 1$ (the relation may well hold on a larger region). By solving this equation for $z$ one may find asymptotic expressions for the individual zeros.
For instance, when $n$ is odd, $s_n(nz)$ has a single real zero $z_n$ which approaches
$$
z=-W(1/e) \approx -0.278465
$$
as $n \to \infty$. For convenience let's define
$$
w = W(1/e).
$$
According to the paper On the Zeroes of the Nth Partial Sum of the Exponential Series by S. Zemyan (JSTOR link), Szegő showed that
$$
z_n = -w - \frac{w}{(1+w)n} \log\left(\sqrt{2\pi n} \frac{1+w}{w}\right) + o\left(\frac{1}{n}\right)
\tag{2}
$$
as $n \to \infty$.
For this result Zemyan cites a book by Pólya and Szegő published in the 60s, though I'm sure Szegő wrote down something like this when he was originally investigating the zeros of these partial sums in the 20s.
In attempting to derive this result myself from equation $(1)$ I found the formula
$$
z_n = -w - \frac{w}{(1+w)n} \log\left(\sqrt{2\pi n} \frac{1+w}{w}\right) - \frac{w}{2(1+w)^3n^2} \left\{\frac{(\log n)^2}{4} + \left[\log\left(\sqrt{2\pi} \frac{1+w}{w}\right)-1\right]\log n\right\} + O\left(\frac{1}{n^2}\right),
\tag{3}
$$
which is a slight improvement on Szegő's approximation $(2)$. The calculation was tedious, to say the least, but the method can be generalized to find approximations for every such zero of $s_n(nz)$. Begin by writing $z = -W(-\zeta/e) + \delta$, where $\zeta$ is an $n^\text{th}$ root of $-1$, and solve $(1)$ for $\delta$ under the assumption that $\delta$ is small. (Note that in my calculation I chose $\zeta = -1$.)
In a sense this method was used in the paper Asymptotics for the zeros of the partial sums of $e^z$. I by A.J. Carpenter, R.S. Varga, and J. Waldvogel (Project Euclid link), though they didn't carry it through as such. I believe it was actually used prior to that in Carpenter's doctoral thesis.
Below is a plot of the numerical solutions to $s_{2n+1}((2n+1)z) = 0$ near $z=-W(1/e)$ as black dots, Szegő's approximation $(2)$ as a blue line, and the approximation in $(3)$ as a red line for $20 \leq n \leq 40$.
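The comparison in the plot can be reproduced numerically. The sketch below is pure Python; `lambertw0` is a small hand-rolled Newton iteration for the principal branch of $W$ (so as not to assume SciPy), and the bisection bracket $[-0.5, 0]$ is chosen by hand to contain the real zero for the $n$ used here:

```python
import math

def lambertw0(x, tol=1e-15):
    # Newton's method for the principal branch of W: solve w e^w = x, x > 0.
    w = math.log1p(x)  # rough initial guess
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

def s(n, z):
    # s_n(nz) = sum_{k=0}^n (nz)^k / k!, summed with a running term.
    term, total = 1.0, 1.0
    for k in range(1, n + 1):
        term *= n * z / k
        total += term
    return total

def real_zero(n, a=-0.5, b=0.0):
    # For odd n, s_n(nz) has exactly one real zero; bisect for it.
    for _ in range(100):
        m = 0.5 * (a + b)
        if s(n, a) * s(n, m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

w = lambertw0(math.exp(-1))   # W(1/e), approximately 0.278465
n = 41
zn = real_zero(n)
# Szego's approximation (2):
szego = -w - w / ((1 + w) * n) * math.log(math.sqrt(2 * math.pi * n) * (1 + w) / w)
print(zn, szego)
```

For $n = 41$ the numerical zero and Szegő's approximation agree to a few times $10^{-4}$, which is the size of the $n^{-2}\log^2 n$ correction in $(3)$.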
As copper.hat said in the comments, it is not possible to write a Taylor series in $x$, $y$, or $z$ around the point $(0,0,0)$ because the function is not differentiable at that point.
What you can do, however, is Taylor expand just the exponential function and then substitute its argument; this yields an expansion in square roots:
$$\begin{aligned}
\mathrm e^{\sqrt{xy}+\sqrt{xz}+\sqrt{yz}}
&= \sum_{n=0}^\infty \frac{1}{n!}(\sqrt{xy}+\sqrt{xz}+\sqrt{yz})^n\\
&= 1 + \sqrt{xy}+\sqrt{xz}+\sqrt{yz}\\
&\phantom{= 1} + \frac12 xy+\frac12 xz+\frac12 yz + \sqrt{xy}\sqrt{xz}+\sqrt{xy}\sqrt{yz}+\sqrt{xz}\sqrt{yz} + \dots
\end{aligned}$$
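As a quick numerical check (in Python, with illustrative values of my choosing), the terms displayed above match the function to the order of the first omitted term, $(\sqrt{xy}+\sqrt{xz}+\sqrt{yz})^3/6$:

```python
import math

def f(x, y, z):
    return math.exp(math.sqrt(x * y) + math.sqrt(x * z) + math.sqrt(y * z))

def expansion(x, y, z):
    # The n = 0, 1, 2 terms of the exponential series shown above.
    a, b, c = math.sqrt(x * y), math.sqrt(x * z), math.sqrt(y * z)
    return (1 + a + b + c
            + 0.5 * (x * y + x * z + y * z)
            + a * b + a * c + b * c)

x = y = z = 1e-4
err = abs(f(x, y, z) - expansion(x, y, z))
# Here a + b + c = 3e-4, so the first omitted term is (3e-4)^3 / 6.
print(err)
```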
The Problem
The main problem when computing $e^{-20}$ is that the terms of the series grow to $\frac{20^{20}}{20!}\approx43099804$ before getting smaller, yet the sum must cancel down to $\approx2.0611536\times10^{-9}$. In a floating point environment, this means that about $16$ digits of accuracy are lost to cancellation among the large terms, which is essentially all the precision a double-precision floating point number carries ($53$ bits $\sim15.9$ digits).
For example, the RMS error in rounding $\frac{20^{20}}{20!}$, using double precision floating point arithmetic, would be $\sim\frac{20^{20}}{20!}\cdot2^{-53}/\sqrt3\approx3\times10^{-9}$. Since the final answer is $\approx2\times10^{-9}$, we lose all significance in the final answer simply by rounding that one term in the sum.
The problem gets worse with larger exponents. For $e^{-30}$, the terms grow to $\frac{30^{30}}{30!}\approx776207020880$ before getting smaller. Then the sum must cancel to be $\approx9.35762296884\times10^{-14}$. Here we lose $25$ digits of accuracy. For $e^{-40}$, we lose $33$ digits of accuracy.
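The cancellation is easy to reproduce. In the sketch below (Python, double precision), `exp_taylor` is a hypothetical direct summation of the series; the term count $150$ is simply chosen large enough that the mathematical truncation error is negligible, so all the remaining error is rounding:

```python
import math

def exp_taylor(x, terms=150):
    # Direct Taylor summation of e^x; the running term x^k/k! is updated
    # multiplicatively so that k! is never formed on its own.
    term, total = 1.0, 1.0
    for k in range(1, terms):
        term *= x / k
        total += term
    return total

naive = exp_taylor(-20.0)   # terms reach ~4.3e7 in magnitude before shrinking
true = math.exp(-20.0)      # ~2.0611536e-9
print(naive, true)
```

Running this, `naive` agrees with the true value in essentially no significant digits, exactly as the error estimate above predicts.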
A Solution
The usual solution is to compute $e^x$ and then use $e^{-x}=1/e^x$. When computing $e^x$ for $x>0$, all terms are positive and the final sum is comparable in size to the largest term of the series, so very little accuracy is lost.
For example, the RMS error in computing $e^{20}$ or $e^{-20}$, using double precision floating point arithmetic, would be $\sim8\times10^{-9}$; the errors are the same because both sums use the same terms, just with different signs. However, this means that using Taylor series, $$ \begin{align} e^{20\hphantom{-}}&=4.85165195409790278\times10^8\pm8\times10^{-9}\\ e^{-20}&=2\times10^{-9}\pm8\times10^{-9} \end{align} $$ Note that the direct computation of $e^{-20}$ carries no significant digits at all. On the other hand, taking the reciprocal of $e^{20}$, we get $$ e^{-20}=2.061153622438557828\times10^{-9}\pm3.4\times10^{-26} $$ which has almost $17$ digits of significance.
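A minimal sketch of the workaround, assuming the same direct summation as in the sketch above for the positive argument:

```python
import math

def exp_taylor(x, terms=150):
    # Direct Taylor summation of e^x with a multiplicatively updated term.
    term, total = 1.0, 1.0
    for k in range(1, terms):
        term *= x / k
        total += term
    return total

def exp_stable(x):
    # For negative x, sum the benign all-positive series for e^{|x|}
    # and take the reciprocal, avoiding the catastrophic cancellation.
    return 1.0 / exp_taylor(-x) if x < 0 else exp_taylor(x)

rel_err = abs(exp_stable(-20.0) / math.exp(-20.0) - 1)
print(rel_err)
```

The relative error drops from order $1$ to a small multiple of machine epsilon, matching the $\sim17$ digits of significance claimed above.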