In the case of a power law, $ P(x; \alpha, x_{min}) = \frac{\alpha - 1}{x_{min}} \left( \frac{x}{x_{min}} \right)^{-\alpha}$, the maximum likelihood estimator (MLE) for $\alpha$ is indeed simple once $x_{min}$ is given, namely $\hat{\alpha} = 1 + n \left( \sum_{i=1}^n \ln{(x_i/x_{min})}\right)^{-1}$. However, there is no such expression for $x_{min}$: the likelihood is increasing in $x_{min}$, which corresponds to throwing out more and more of the data, so another method is needed. Clauset et al. choose $x_{min}$ by minimizing the Kolmogorov–Smirnov (KS) distance between the empirical distribution of the data above $x_{min}$ and the fitted power law.
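As a concrete illustration, here is a minimal Mathematica sketch of that recipe (the helper names alphaHat, ksDistance and xminHat are mine, not from Clauset et al., and the empirical CDF is handled in the simplest possible way): the closed-form $\hat{\alpha}$ for a given $x_{min}$, the KS distance of the tail against the fitted power law, and a scan over candidate $x_{min}$ values.

alphaHat[xs_, xmin_] := Module[{tail = Select[xs, # >= xmin &]},
  1 + Length[tail]/Total[Log[tail/xmin]]]   (* MLE for \[Alpha] at fixed xmin *)

ksDistance[xs_, xmin_] := Module[{tail, n, a},
  tail = Sort@Select[xs, # >= xmin &]; n = Length[tail];
  a = alphaHat[xs, xmin];
  Max@Abs[Range[n]/n - (1 - (tail/xmin)^(1 - a))]]   (* empirical vs fitted CDF *)

(* exclude the largest observation so the tail never degenerates to one point *)
xminHat[xs_] := First@MinimalBy[Most@Union[xs], ksDistance[xs, #] &]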
In the case of a power law with an exponential cut-off, $P(x; \alpha, \lambda, x_{min}) = \frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha,\lambda x_{min})} x^{-\alpha} e^{-\lambda x}$, finding exact expressions is much harder: the derivatives of the log-likelihood involve, among other things, a Meijer G-function, and a closed-form solution seems unlikely. The estimators of $\lambda$ and $\alpha$ are coupled through the normalization constant, so mikitov's idea of finding them sequentially unfortunately does not work.
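To make the coupling explicit, here is a sketch of the first-order conditions obtained by differentiating the per-observation log density summed over the sample (writing $\bar{x}$ and $\overline{\ln x}$ for the sample means):

$$\frac{1-\alpha}{\lambda} + \frac{x_{min}\,(\lambda x_{min})^{-\alpha} e^{-\lambda x_{min}}}{\Gamma(1-\alpha,\lambda x_{min})} = \bar{x}, \qquad -\ln\lambda + \frac{\partial_s \Gamma(s,\lambda x_{min})\big|_{s=1-\alpha}}{\Gamma(1-\alpha,\lambda x_{min})} = \overline{\ln x}.$$

Both equations involve both parameters through the incomplete gamma normalization, and the derivative of $\Gamma(s,z)$ with respect to its order $s$ has no elementary closed form (this is where the Meijer G-function appears), so neither equation can be solved for one parameter on its own.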
We therefore have to use numerical methods. The log-likelihood is $\mathcal{L} = n\left[(1-\alpha)\ln{\lambda} - \ln{\Gamma(1-\alpha,\lambda x_{min})}\right] - \alpha\sum_{i=1}^n\ln{x_i} - \lambda\sum_{i=1}^nx_i$. Mathematica's NMaximize seems to do a fairly good job of finding the MLEs:
Clear[\[Lambda], \[Alpha], xmin]
NMaximize[{
   (* log-likelihood as above, with Re@ to guard against spurious imaginary parts *)
   Length[xs] Log[\[Lambda]^(1 - \[Alpha])/Re@Gamma[1 - \[Alpha], xmin \[Lambda]]] -
     \[Alpha] Total[Log[xs]] - \[Lambda] Total[xs],
   \[Alpha] >= 1, \[Alpha] <= 3, \[Lambda] >= 0, xmin > 0, xmin <= Min[xs]},
  {\[Alpha], \[Lambda], xmin}]
where xs is the data and $x_{min} = \min_i x_i$. This would have to be combined with a KS-distance minimization over $x_{min}$, similar to the procedure for the pure power law; a sketch follows.
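A rough sketch of how that combination could look (the helper names cutoffFit, cutoffKS and xminCutoff are mine, and nothing here is tuned for speed or robustness): refit $\alpha$ and $\lambda$ on the tail for each candidate $x_{min}$, compute the KS distance against the fitted CDF $F(x) = 1 - \Gamma(1-\alpha,\lambda x)/\Gamma(1-\alpha,\lambda x_{min})$, and keep the candidate with the smallest distance.

(* assumes \[Alpha] and \[Lambda] are cleared, as above *)
cutoffFit[tail_, xmin_] := NMaximize[{
    Length[tail] Log[\[Lambda]^(1 - \[Alpha])/Re@Gamma[1 - \[Alpha], xmin \[Lambda]]] -
      \[Alpha] Total[Log[tail]] - \[Lambda] Total[tail],
    \[Alpha] >= 1, \[Alpha] <= 3, \[Lambda] >= 0}, {\[Alpha], \[Lambda]}]

cutoffKS[xs_, xmin_] := Module[{tail, n, a, l, model},
  tail = Sort@Select[xs, # >= xmin &]; n = Length[tail];
  {a, l} = {\[Alpha], \[Lambda]} /. Last@cutoffFit[tail, xmin];
  model = 1 - Re@Gamma[1 - a, l tail]/Re@Gamma[1 - a, l xmin];  (* fitted CDF above xmin *)
  Max@Abs[Range[n]/n - model]]

(* exclude the largest observation so the tail never degenerates to one point *)
xminCutoff[xs_] := First@MinimalBy[Most@Union[xs], cutoffKS[xs, #] &]

In practice one would also restrict the candidate $x_{min}$ values so that enough tail points remain for a stable fit, since the refit becomes unreliable on very short tails.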
Best Answer
The answer is on the Wikipedia page: there is no closed-form solution, so you have to use an iterative method like the one they provide.