The general proof of this can be found in Feller (An Introduction to Probability Theory and Its Applications, Vol. 2). It is an inversion problem involving Laplace transform theory. Did you notice that the mgf bears a striking resemblance to the Laplace transform? For the use of the Laplace transform, you can see Widder (Calculus, Vol. I).
Proof of a special case:
Suppose that $X$ and $Y$ are random variables, both taking values only in $\{0, 1, 2, \dots, n\}$.
Further, suppose that $X$ and $Y$ have the same mgf for all $t$:
$$\sum_{x=0}^ne^{tx}f_X(x)=\sum_{y=0}^ne^{ty}f_Y(y)$$
For simplicity, we will let $s = e^t$ (so $s > 0$),
and we will define $c_i = f_X(i) - f_Y(i)$ for $i = 0, 1,\dots,n$.
Now
$$\sum_{x=0}^ne^{tx}f_X(x)-\sum_{y=0}^ne^{ty}f_Y(y)=0$$
$$\Rightarrow \sum_{x=0}^ns^xf_X(x)-\sum_{y=0}^ns^yf_Y(y)=0$$
$$\Rightarrow \sum_{x=0}^ns^xf_X(x)-\sum_{x=0}^ns^xf_Y(x)=0$$
$$\Rightarrow\sum_{x=0}^ns^x[f_X(x)-f_Y(x)]=0$$
$$\Rightarrow \sum_{x=0}^ns^xc_x=0 \quad \forall\, s>0$$
The left-hand side is simply a polynomial in $s$ with coefficients $c_0, c_1,\dots,c_n$. A nonzero polynomial of degree at most $n$ has at most $n$ roots, so the only way it can vanish for every $s > 0$ (an infinite set of points) is if $c_0=c_1=\cdots= c_n=0$. So, we have that $0=c_i=f_X(i)-f_Y(i)$ for $i=0, 1,\dots,n$.
Therefore, $f_X(i)=f_Y(i)$ for $i=0,1,\dots,n$.
In other words, the probability mass functions of $X$ and $Y$ are exactly the same; that is, $X$ and $Y$ have the same distribution.
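As a quick numerical illustration of this special case (a sketch only; the Binomial(4, 0.3) pmf here is a hypothetical choice, not from the argument above): because the mgf of a random variable on $\{0,\dots,n\}$ is a degree-$n$ polynomial in $s = e^t$ whose coefficients are the pmf values, evaluating the mgf at $n+1$ distinct points yields a Vandermonde system whose unique solution recovers the pmf.

```python
import numpy as np
from math import comb

n = 4
# Hypothetical example pmf: Binomial(n, 0.3) on {0, ..., n}
p = 0.3
pmf = np.array([comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)])

def mgf(t):
    # M_X(t) = sum_x e^{tx} f_X(x), a polynomial in s = e^t
    return sum(np.exp(t * x) * fx for x, fx in enumerate(pmf))

# Evaluate the mgf at n+1 distinct points t_j; with s_j = e^{t_j},
# the values satisfy V @ pmf = mgf values, where V is Vandermonde.
ts = np.linspace(-1.0, 1.0, n + 1)
s = np.exp(ts)
V = np.vander(s, n + 1, increasing=True)  # columns s^0, ..., s^n
recovered = np.linalg.solve(V, np.array([mgf(t) for t in ts]))

print(np.allclose(recovered, pmf))  # the mgf pins down the pmf
```

Since the $s_j$ are distinct, the Vandermonde matrix is invertible, which is exactly the polynomial-coefficient uniqueness used in the proof.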
Best Answer
If the random variable $X$ has pdf $f(x)$, $x\in \mathbb{R}$, then the moment generating function $M_X(t)$, provided it exists, is the two-sided Laplace transform of $f$ evaluated at $-t$. If $\mathcal{L}\{f(x)\}(t)$ denotes the (one-sided) Laplace transform of $f(x)$, then
$$ M_X(-t)=\mathcal{L}\{f(x)\}(t)+\mathcal{L}\{f(-x)\}(-t) $$
and $M_X(t)$ is unique iff both $\mathcal{L}\{f(x)\}(t)$ and $\mathcal{L}\{f(-x)\}(-t)$ are unique. But we already know that the Laplace transform of a function is unique, provided it exists.