The general proof of this can be found in Feller (An Introduction to Probability Theory and Its Applications, Vol. 2); it is an inversion problem involving Laplace transform theory. Did you notice that the mgf bears a striking resemblance to the Laplace transform? For background on the Laplace transform, see Widder (Calculus, Vol. I).
Proof of a special case:
Suppose that $X$ and $Y$ are random variables that both take values only in $\{0, 1, 2, \dots, n\}$.
Further, suppose that $X$ and $Y$ have the same mgf for all $t$:
$$\sum_{x=0}^ne^{tx}f_X(x)=\sum_{y=0}^ne^{ty}f_Y(y)$$
For simplicity, we will let $s = e^t$ and define $c_i = f_X(i) - f_Y(i)$ for $i = 0, 1, \dots, n$.
Now
$$\sum_{x=0}^ne^{tx}f_X(x)-\sum_{y=0}^ne^{ty}f_Y(y)=0$$
$$\Rightarrow \sum_{x=0}^ns^xf_X(x)-\sum_{y=0}^ns^yf_Y(y)=0$$
$$\Rightarrow \sum_{x=0}^ns^xf_X(x)-\sum_{x=0}^ns^xf_Y(x)=0$$
$$\Rightarrow\sum_{x=0}^ns^x[f_X(x)-f_Y(x)]=0$$
$$\Rightarrow \sum_{x=0}^ns^xc_x=0 \quad \forall\, s>0$$
The left-hand side is simply a polynomial in $s$ with coefficients $c_0, c_1,\dots,c_n$. The only way it can be zero for all values of $s$ is if $c_0=c_1=\cdots= c_n=0$. So we have $0=c_i=f_X(i)-f_Y(i)$ for $i=0, 1,\dots,n$.
Therefore, $f_X(i)=f_Y(i)$ for $i=0,1,\dots,n$.
In other words, the density functions of $X$ and $Y$ are exactly the same; that is, $X$ and $Y$ have the same distribution.
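As a sanity check on the key step (a polynomial vanishing for all $s > 0$ must have all zero coefficients), here is a minimal numerical sketch, assuming `numpy`; the pmfs and evaluation points are arbitrary illustrative choices. Evaluating at $n+1$ distinct points gives an invertible Vandermonde system, so the mgf values pin down the coefficients $c_x$ uniquely:

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)

# Two arbitrary pmfs on {0, ..., n} (illustrative choices).
fX = rng.random(n + 1); fX /= fX.sum()
fY = rng.random(n + 1); fY /= fY.sum()

# Evaluate the mgf difference at n+1 distinct points s = e^t > 0.
s = np.linspace(0.5, 2.5, n + 1)
V = np.vander(s, n + 1, increasing=True)  # V[i, x] = s_i**x
mgf_diff = V @ (fX - fY)                  # sum_x s^x [fX(x) - fY(x)]

# The Vandermonde matrix at distinct points is invertible, so the
# mgf values uniquely determine the coefficients c_x; in particular,
# mgf_diff == 0 everywhere would force fX == fY.
c = np.linalg.solve(V, mgf_diff)
print(np.allclose(c, fX - fY))            # True
```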
Since $0 < \delta <1$, we have $\log \delta <0$. So, writing $V = \delta^{\tau}$ with $\tau \sim \text{Exponential}(\lambda)$, the density of $V$ is
$$g(v) = f\left(\frac{\log v}{\log \delta}\right)\cdot \left|\frac{\partial \tau }{\partial v }\right| = \lambda \exp\left\{-\lambda \frac{\log v}{\log \delta}\right\}\frac{1}{v\,|\log \delta|} = \frac {\lambda}{|\log \delta|}\,\frac 1v\,\exp\left\{ \frac{\lambda}{|\log \delta|}\log v\right\}$$
Set $\alpha \equiv \frac{\lambda}{|\log \delta|}$. Since $\exp\{\alpha \log v\} = v^{\alpha}$, this simplifies to
$$g(v) = \alpha v^{\alpha-1}, \;\;v\in [0,1] $$
which is the density of a $\text{Beta}(\alpha,1)$ distribution.
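A quick simulation check of this derivation, as a minimal sketch assuming `numpy` and `scipy` ($\lambda$ and $\delta$ are arbitrary illustrative values): draw $\tau \sim \text{Exponential}(\lambda)$, form $V = \delta^{\tau}$, and test the sample against the $\text{Beta}(\alpha,1)$ CDF.

```python
import numpy as np
from scipy import stats

lam, delta = 2.0, 0.3                    # illustrative values, 0 < delta < 1
alpha = lam / abs(np.log(delta))

rng = np.random.default_rng(1)
# numpy parameterizes the exponential by scale = 1/rate.
tau = rng.exponential(scale=1.0 / lam, size=100_000)
v = delta ** tau

# Kolmogorov-Smirnov test against Beta(alpha, 1): a large p-value means
# the simulated sample is consistent with the derived density.
print(stats.kstest(v, stats.beta(alpha, 1).cdf))
```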
The moment generating function is
$$MGF_V(\alpha,1,t) = 1 +\sum_{k=1}^{\infty} \left( \prod_{r=0}^{k-1} \frac{\alpha+r}{\alpha+r+1} \right) \frac{t^k}{k!}$$
Since the product telescopes to $\frac{\alpha}{\alpha+k}$, this says $E[V^k] = \frac{\alpha}{\alpha+k}$, which, among other things, provides a nice recursive formula
$$E[V^s] = \frac {\alpha +s-1}{\alpha+s}E[V^{s-1}]$$
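Both the closed form $E[V^k] = \frac{\alpha}{\alpha+k}$ and the recursion are easy to verify numerically; a minimal sketch assuming `scipy`, with an arbitrary illustrative $\alpha$:

```python
from scipy import stats

alpha = 1.7                              # illustrative value
V = stats.beta(alpha, 1)

for k in range(1, 6):
    direct = V.moment(k)                                    # raw moment E[V^k]
    closed = alpha / (alpha + k)                            # telescoped product
    recursed = (alpha + k - 1) / (alpha + k) * V.moment(k - 1)
    print(k, direct, closed, recursed)                      # all three agree
```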
The answer is given in whuber's comment; I will write it out in detail.
First note that $\frac{x(x-1)\cdots (x-r+1)}{x!} = \frac{1}{(x-r)!}$. Using that, and noting that the terms with $k < r$ vanish, $$ \DeclareMathOperator{\E}{\mathbb{E}} \E[X(X-1)\cdots(X-r+1)] = \sum_{k=0}^\infty k(k-1)\cdots (k-r+1)\, e^{-\lambda} \frac{\lambda^k}{k!} \\ = \lambda^r \sum_{k=r}^\infty e^{-\lambda}\frac{\lambda^{k-r}}{(k-r)!} \\ = \lambda^r \sum_{j=0}^\infty e^{-\lambda} \frac{\lambda^j}{j!} \\ = \lambda^r $$
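A quick truncated-sum check of this identity, as a minimal sketch ($\lambda$ and $r$ are arbitrary illustrative values, and the infinite sum is truncated at 200 terms):

```python
import math

lam, r = 3.5, 4                          # illustrative values

# Terms with k < r are zero because the falling factorial
# k(k-1)...(k-r+1) then contains a zero factor.
total = sum(
    math.prod(range(k - r + 1, k + 1))   # k(k-1)...(k-r+1)
    * math.exp(-lam) * lam**k / math.factorial(k)
    for k in range(200)
)
print(total, lam**r)                     # both ~ 150.0625
```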