Let's do it your way, and then let's do it another way that may or may not be preferable. Using your notation,
$$\begin{align*}
m_X(u) &= \sum_{x=0}^\infty e^{ux} \binom{r+x-1}{x} p^r (1-p)^x \\
&= \sum_{x=0}^\infty \binom{r+x-1}{x} p^r ((1-p)e^u)^x \\
&= \frac{p^r}{(1-(1-p)e^u)^r} \sum_{x=0}^\infty \binom{r+x-1}{x} (1 - (1-p)e^u)^r ((1-p)e^u)^x \\
&= \left(\frac{p}{1-(1-p)e^u}\right)^r. \end{align*}$$
Note that the lower index of summation should begin at $x = 0$, since the support of $X$ is $\{0, 1, 2, \ldots\}$. In the third step, I have pulled the constant $p^r$ out of the sum, and multiplied and divided by a factor of $(1 - (1-p)e^u)^r$, neither of which depends on the variable of summation $x$. Why did I choose this factor to insert? The reason is that if we recall the PMF of a negative binomial distribution, $$\Pr[X = x] = \binom{r+x-1}{x} p^r (1-p)^x,$$ the relationship between the factors $p^r$ and $(1-p)^x$ is such that the bases must add to $1$. So long as $$0 < (1-p)e^u < 1,$$ we can think of this as a Bernoulli probability of a single trial; i.e., let $1-p^* = (1-p)e^u$, where $p^*$ is the "modified" Bernoulli probability of some other negative binomial random variable. In this way, the sum becomes the total probability of this "new" negative binomial random variable with parameters $p^*$ and $r$, which is of course $1$.
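If you want to sanity-check the closed form numerically, here is a quick sketch (my own illustration, with arbitrarily chosen $r$, $p$, and $u$; conveniently, scipy's `nbinom` uses the same failure-count convention as above, with support $\{0, 1, 2, \ldots\}$):

```python
import numpy as np
from scipy.stats import nbinom

r, p, u = 5, 0.3, 0.2                    # need u < -log(1-p) ≈ 0.357 to converge
x = np.arange(0, 2000)                   # truncate the series deep in the tail
direct = np.sum(np.exp(u * x) * nbinom.pmf(x, r, p))   # E[e^{uX}] term by term
closed = (p / (1 - (1 - p) * np.exp(u))) ** r          # the closed form above
print(direct, closed)                    # both ≈ 37.89
```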
What is the other method? Well, recall that a negative binomial random variable is simply the sum of $r$ independent and identically distributed geometric random variables; i.e., $$X = Y_1 + Y_2 + \cdots + Y_r,$$ where each $Y_i \sim \operatorname{Geometric}(p)$, with PMF $$\Pr[Y = y] = p(1-p)^y, \quad y = 0, 1, 2, \ldots.$$ Also recall that the MGF of the sum of $r$ iid random variables is simply the MGF of one such random variable raised to the $r^{\rm th}$ power; i.e., $$m_X(u) = \left(m_Y(u)\right)^r.$$ Now if you already know that the MGF of the geometric distribution is $$m_Y(u) = \frac{p}{1-(1-p)e^u},$$ the result immediately follows. If you don't know this in advance, you can derive it readily as follows: $$\begin{align*}
m_Y(u) &= \sum_{y=0}^\infty e^{uy} p (1-p)^y \\
&= p \sum_{y=0}^\infty ((1-p)e^u)^y \\
&= p \cdot \frac{1}{1-(1-p)e^u},
\end{align*}$$
where the last step is the consequence of the fact that the sum is an infinite geometric series with common ratio $(1-p)e^u$. Of course, this imposes the condition $|(1-p)e^u| < 1$, otherwise the series fails to converge. Since $e^u > 0$ for all $u$, and by construction $0 < p < 1$, it follows that $m_Y(u)$ is defined if and only if $u < -\log(1-p)$. And this restriction carries over into the MGF of the negative binomial distribution.
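Here is another quick numerical sketch (my own, with assumed parameter values) confirming both the geometric MGF and the iid-sum identity $m_X(u) = (m_Y(u))^r$ by simulation; note that numpy's geometric sampler counts trials rather than failures, hence the $-1$:

```python
import numpy as np

rng = np.random.default_rng(0)
r, p, u = 5, 0.3, 0.1                       # u < -log(1-p) ≈ 0.357
m_Y = p / (1 - (1 - p) * np.exp(u))         # geometric MGF from above
Y = rng.geometric(p, size=(10**6, r)) - 1   # failures before first success
X = Y.sum(axis=1)                           # negative binomial by construction
print(m_Y ** r, np.exp(u * X).mean())       # both ≈ 4.09
```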
Maybe we should start by untangling some confusion.
The probability generating function is your function $G(z)$, and it is defined via an expectation: $G(z) = \mathbb E[z^X]$. However, what you are looking for is the moment generating function, which is defined similarly:
$$M_X(t):=\mathbb E[\exp(Xt)]=\int_{\mathbb R}\exp(xt)f_X(x)\,dx,$$ where the integral form applies when $X$ has a density $f_X$; for a discrete $X$ the integral becomes a sum over the support.
We can indeed use the tower property that you mentioned.
We first note that $$M_Z(t)=(1-p+p\,e^t)^N$$
would hold if $p$ were NOT random. How do we get rid of the randomness? That is doable via the tower property:
$$M_Z(t)=\mathbb E[exp(Zt)]=\mathbb E[\mathbb E[exp(Zt)|Y]]$$
To compute that, look at the quantity $\mathbb E[\exp(Zt)\mid Y=y]$. This has the form mentioned above, but with $p$ defined as given in your problem. Now this is a function $g(y)$ of $y$: a number for each fixed $y$. To compute the outer expectation with respect to the given exponential distribution, just integrate $g$ against the exponential density:
$$M_Z(t)=\int_{\mathbb R^+}g(y)\lambda_0 \exp(-\lambda_0 y)dy.$$
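To make the recipe concrete, here is a numerical sketch (my own illustration, not from the question; the parameter values and the choice $p(y) = 1 - e^{-\lambda_1 y}$ are assumptions matching the computation in the edit below):

```python
import numpy as np
from scipy.integrate import quad

lam0, lam1, N, t = 2.0, 3.0, 4, 0.5         # assumed values, for illustration

def g(y):
    p = 1 - np.exp(-lam1 * y)               # conditional success probability
    return (1 - p + p * np.exp(t)) ** N     # binomial MGF given Y = y

M_Z, _ = quad(lambda y: g(y) * lam0 * np.exp(-lam0 * y), 0, np.inf)

# Cross-check by Monte Carlo: draw Y, then Z | Y ~ Bin(N, p(Y)).
rng = np.random.default_rng(1)
Y = rng.exponential(1 / lam0, size=10**6)
Z = rng.binomial(N, 1 - np.exp(-lam1 * Y))
print(M_Z, np.exp(t * Z).mean())            # both ≈ 4.15
```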
If you need more explicit help, feel free to ask. Hope this helps without spoiling the fun ;)
EDIT: I will just add my computation of the moment generating function for $N=1$. We keep in mind how the exponential distribution is defined, and we will use that to avoid computing any integrals explicitly.
$$M_Z(t)=\mathbb E_Y\left[1+(1-e^{-\lambda_1 Y})(e^t-1)\right]$$
$$=\int_0^\infty\left(1+(1-e^{-\lambda_1 y})(e^t-1)\right)\lambda_0e^{-\lambda_0 y}dy$$
$$=\int_0^\infty\lambda_0e^{-\lambda_0y}dy+(e^t-1)\int_0^\infty(1-e^{-\lambda_1y})\lambda_0e^{-\lambda_0 y}dy$$
$$=1+(e^t-1)\left(1-\frac{\lambda_0}{\lambda_0+\lambda_1}\cdot\int_0^\infty(\lambda_0+\lambda_1)e^{-(\lambda_0+\lambda_1) y}dy\right)$$
$$=1+(e^t-1)\left(1-\frac{\lambda_0}{\lambda_0+\lambda_1}\right)
=1+(e^t-1)\cdot\frac{\lambda_1}{\lambda_0+\lambda_1}$$
$$=\frac{\lambda_0}{\lambda_0+\lambda_1}+e^t\frac{\lambda_1}{\lambda_0+\lambda_1}=q+e^t(1-q)$$
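As a check on this closed form (my own sketch, with arbitrary parameter values), one can compare the integral against $q+(1-q)e^t$ directly:

```python
import numpy as np
from scipy.integrate import quad

lam0, lam1, t = 2.0, 3.0, 0.5               # assumed values
q = lam0 / (lam0 + lam1)
integrand = lambda y: (1 + (1 - np.exp(-lam1 * y)) * (np.exp(t) - 1)) \
    * lam0 * np.exp(-lam0 * y)
val, _ = quad(integrand, 0, np.inf)
print(val, q + (1 - q) * np.exp(t))         # both ≈ 1.389
```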
For other values of $N$, one can expand $\left(1+(1-e^{-\lambda_1 Y})(e^t-1)\right)^N$ with the binomial theorem and integrate term by term in exactly the same way, using $\mathbb E[e^{-k\lambda_1 Y}]=\frac{\lambda_0}{\lambda_0+k\lambda_1}$ for each power $k$. A caveat, though: the result is not $(q+(1-q)e^t)^N$ in general, since $\mathbb E[(1-e^{-\lambda_1 Y})^k]\neq(1-q)^k$ for $k\geq 2$. The case $N=1$ is special, because every $\{0,1\}$-valued random variable is Bernoulli, so there $Z\sim\operatorname{Bin}(1,1-q)$; for larger $N$, randomizing the success parameter does change the type of distribution.
In any case, this also explains why the inverse Fourier transform yields something constant: the random variable $Z$ is discrete and thus has no (absolutely continuous) density with respect to Lebesgue measure.
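A quick numerical check of the caveat above (my own sketch, with assumed values $\lambda_0=\lambda_1=1$): for $N=2$ the exact mixture MGF, computed by quadrature, visibly differs from $(q+(1-q)e^t)^N$.

```python
import numpy as np
from scipy.integrate import quad

lam0, lam1, N, t = 1.0, 1.0, 2, 0.5         # assumed values
q = lam0 / (lam0 + lam1)

def g(y):
    p = 1 - np.exp(-lam1 * y)
    return (1 - p + p * np.exp(t)) ** N     # binomial MGF given Y = y

exact, _ = quad(lambda y: g(y) * lam0 * np.exp(-lam0 * y), 0, np.inf)
print(exact, (q + (1 - q) * np.exp(t)) ** N)   # ≈ 1.789 vs ≈ 1.754
```

With $N=1$ the two printed values would coincide, matching the computation above.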
You are computing $\mathbb E[e^{tX}]$ and they are computing $\mathbb E[t^X]$. Both calculations are correct, but according to Wikipedia your definition is the correct one for the MGF (moment generating function). What they have computed is usually called the (probability) generating function; the two should not be confused.
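To see the difference concretely, here is a small sketch (my own example, using a Poisson distribution purely for illustration) computing both objects and their closed forms:

```python
import numpy as np
from scipy.stats import poisson

lam, t = 2.0, 0.5                      # X ~ Poisson(2), an assumed example
x = np.arange(0, 200)                  # truncate far beyond the effective support
pmf = poisson.pmf(x, lam)
mgf = np.sum(np.exp(t * x) * pmf)      # E[e^{tX}] = exp(lam * (e^t - 1))
pgf = np.sum(t ** x * pmf)             # E[t^X]    = exp(lam * (t - 1))
print(mgf, np.exp(lam * (np.exp(t) - 1)))   # both ≈ 3.66
print(pgf, np.exp(lam * (t - 1)))           # both ≈ 0.368
```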