The general proof of this can be found in Feller (An Introduction to Probability Theory and Its Applications, Vol. 2). It is an inversion problem involving Laplace transform theory. Did you notice that the mgf bears a striking resemblance to the Laplace transform? For background on the Laplace transform, you can see Widder (Calculus, Vol. I).
Proof of a special case:
Suppose that $X$ and $Y$ are random variables, both taking values only in $\{0, 1, 2, \dots, n\}$.
Further, suppose that $X$ and $Y$ have the same mgf for all $t$:
$$\sum_{x=0}^ne^{tx}f_X(x)=\sum_{y=0}^ne^{ty}f_Y(y)$$
For simplicity, we will let $s = e^t$, and we will define $c_i = f_X(i) - f_Y(i)$ for $i = 0, 1, \dots, n$.
Now
$$\sum_{x=0}^ne^{tx}f_X(x)-\sum_{y=0}^ne^{ty}f_Y(y)=0$$
$$\Rightarrow \sum_{x=0}^ns^xf_X(x)-\sum_{y=0}^ns^yf_Y(y)=0$$
$$\Rightarrow \sum_{x=0}^ns^xf_X(x)-\sum_{x=0}^ns^xf_Y(x)=0$$
$$\Rightarrow\sum_{x=0}^ns^x[f_X(x)-f_Y(x)]=0$$
$$\Rightarrow \sum_{x=0}^n s^x c_x = 0 \quad \forall\, s > 0$$
The left-hand side is simply a polynomial in $s$ with coefficients $c_0, c_1, \dots, c_n$. A nonzero polynomial of degree at most $n$ has at most $n$ roots, so the only way it can vanish for every $s > 0$ is if $c_0 = c_1 = \cdots = c_n = 0$. So we have that $0 = c_i = f_X(i) - f_Y(i)$ for $i = 0, 1, \dots, n$.
Therefore, $f_X(i)=f_Y(i)$ for $i=0,1,\dots,n$.
In other words, the density functions of $X$ and $Y$ are exactly the same; that is, $X$ and $Y$ have the same distribution.
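The polynomial argument above can be checked numerically: since the mgf in $s = e^t$ is the polynomial $\sum_{x} s^x f(x)$, evaluating it at $n+1$ distinct values of $s$ gives a Vandermonde system whose unique solution is the PMF. A minimal sketch (the PMF below is an arbitrary example, not from the question):

```python
import numpy as np

# Arbitrary example PMF on {0, 1, 2, 3}
f = np.array([0.1, 0.2, 0.4, 0.3])
n = len(f) - 1

# Evaluate the mgf polynomial at n+1 distinct values of s = e^t
s = np.linspace(1.0, 2.0, n + 1)
V = np.vander(s, n + 1, increasing=True)  # V[i, x] = s_i ** x
mgf_values = V @ f                        # mgf evaluated at each s_i

# Solving the Vandermonde system recovers the coefficients, i.e. the PMF,
# so two random variables with the same mgf must have the same PMF
recovered = np.linalg.solve(V, mgf_values)
print(np.allclose(recovered, f))          # True
```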
You may also see it this way: let $\Theta$ be a Bernoulli RV, independent of $X$ and $Y$, with $P(\Theta = 0) = p$ and $P(\Theta = 1) = 1-p$, and define the mixture $Z = X$ when $\Theta = 0$ and $Z = Y$ when $\Theta = 1$. Then the PMF of $Z$ is $$P(Z = k) = P(X = k, \Theta = 0) + P(Y = k, \Theta = 1)$$ $$= P(X = k)P(\Theta = 0) + P(Y = k)P(\Theta = 1),$$ where the second equality uses the independence of $\Theta$ from $X, Y$, so $$P(Z = k) = p\, P(X = k) + (1-p)\, P(Y = k).$$ The MGF is given by $$\mathbf{E}[e^{s Z}] = \sum_{k \in \chi} e^{s k} P(Z = k) = \sum_{k} \left[p\, e^{s k} P(X = k) + (1-p)\, e^{s k} P(Y = k)\right],$$ where $\chi$ is the set of possible values of $Z$, so $$\mathbf{E}[e^{s Z}] = p\, \mathbf{E}[e^{s X}] + (1-p)\, \mathbf{E}[e^{s Y}].$$
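The mixture identity can be sanity-checked by simulation. In this sketch the distributions of $X$ and $Y$ and the value of $p$ are arbitrary choices for illustration, not from the original question:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3
n_samples = 200_000

# Arbitrary example distributions: X ~ Bernoulli(0.5), Y uniform on {0, 1, 2}
X = rng.integers(0, 2, n_samples)
Y = rng.integers(0, 3, n_samples)

# Theta = 0 (probability p) selects X; Theta = 1 (probability 1-p) selects Y
theta = rng.random(n_samples) < (1 - p)
Z = np.where(theta, Y, X)

# Empirical MGF of the mixture at a fixed s
s = 0.4
mgf_Z_empirical = np.mean(np.exp(s * Z))

# Exact MGFs of X and Y, combined with weights p and 1 - p
mgf_X = 0.5 * (1 + np.exp(s))
mgf_Y = (1 + np.exp(s) + np.exp(2 * s)) / 3
mgf_mix = p * mgf_X + (1 - p) * mgf_Y

print(abs(mgf_Z_empirical - mgf_mix) < 0.02)  # True
```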