Since you are a tutor, knowledge is always for a good cause, so I will provide some bounds for the MLE.
We have arrived at
$$(1-\lambda x_{(n)})e^{\lambda x_{(n)} } + \lambda n x_{(n)} - 1 = 0$$
with $x_{(n)}\equiv M_n$ denoting the sample maximum. So
$$(1-\hat \lambda x_{(n)})e^{\hat \lambda x_{(n)}} = 1-\hat \lambda x_{(n)}n $$
Assume first that $1-\hat \lambda x_{(n)} >0$. Then we must also have $1-\hat \lambda x_{(n)}n>0$, since the exponential is always positive. Moreover, since $x_{(n)}, \hat \lambda > 0$, we have $e^{\hat \lambda x_{(n)}}>1$. Therefore we should have
$$\frac {1-\hat \lambda x_{(n)}n}{1-\hat \lambda x_{(n)}}>1 \Rightarrow \hat \lambda x_{(n)}>\hat \lambda x_{(n)}n$$
which is impossible since $\hat \lambda x_{(n)}>0$ and $n\ge 2$. The boundary case $1-\hat \lambda x_{(n)}=0$ is also excluded, since it would give $0=1-n$. Therefore we conclude that
$$\hat \lambda >\frac 1{x_{(n)}},\;\; \hat \lambda = \frac c{x_{(n)}}, \;\; c>1$$
Inserting into the log-likelihood we get
$$\ell(\hat\lambda(c)\mid x_{(n)}) = \log \frac c{x_{(n)}} + \log n - \frac c{x_{(n)}} x_{(n)} + (n-1) \log (1 - e^{-\frac c{x_{(n)}} x_{(n)}})$$
$$= \log \frac n{x_{(n)}} + \log c - c + (n-1) \log (1 - e^{-c})$$
We want to maximize this likelihood with respect to $c$. Its 1st derivative is
$$\frac{d\ell}{dc}=\frac 1c -1 +(n-1)\frac 1{e^{c}-1}$$
Setting this equal to zero, we require that
$$e^{c}-1 - c\left(e^{c}-1\right)+(n-1)c =0$$
$$\Rightarrow \left(n-e^c\right)c = 1-e^c$$
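As a sanity check, this algebra can be verified symbolically; here is a minimal sketch in Python (assuming `sympy` is available), using only the $c$-dependent part of the log-likelihood:

```python
import sympy as sp

c, n = sp.symbols("c n", positive=True)

# c-dependent part of the reparametrized log-likelihood from above
ell = sp.log(c) - c + (n - 1) * sp.log(1 - sp.exp(-c))

# Multiplying dl/dc by c*(e^c - 1) should recover the first-order condition
foc_lhs = sp.diff(ell, c) * c * (sp.exp(c) - 1)
target = (sp.exp(c) - 1) - c * (sp.exp(c) - 1) + (n - 1) * c
print(sp.simplify(foc_lhs - target))  # expect 0
```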
Since $c>1$ the RHS is negative. Therefore we must also have $n-e^c <0 \Rightarrow c > \ln n$. For $n\ge 3$ we have $\ln n > 1$, so this is a tighter lower bound for the MLE; for $n=2$, $\ln 2 < 1$, so it is not. Combining the two cases,
$$\hat \lambda > \max \left\{\frac 1{x_{(n)}}, \frac {\ln n}{x_{(n)}}\right\}$$
Moreover (for $n\ge 3$), rearranging the 1st-order condition, we have that
$$c= \frac{e^c-1}{e^c-n} > \ln n \Rightarrow e^c -1 > e^c\ln n -n\ln n $$
$$\Rightarrow n\ln n-1>e^c(\ln n -1) \Rightarrow c< \ln{\left[\frac{n\ln n-1}{\ln n -1}\right]}$$
So for $n\ge 3$ we have that
$$\frac 1{x_{(n)}}\ln n < \hat \lambda < \frac 1{x_{(n)}}\ln{\left[\frac{n\ln n-1}{\ln n -1}\right]}$$
This is a narrow interval, especially if $x_{(n)}\ge 1$. For example (values truncated at the third decimal digit):
$$\begin{align}
n=10 & &\frac 1{x_{(n)}}2.302 < \hat \lambda < \frac 1{x_{(n)}}2.827\\
n=100 & & \frac 1{x_{(n)}}4.605 < \hat \lambda < \frac 1{x_{(n)}}4.847\\
n=1000 & & \frac 1{x_{(n)}}6.907 < \hat \lambda < \frac 1{x_{(n)}}7.063\\
n=10000 & & \frac 1{x_{(n)}}9.210< \hat \lambda < \frac 1{x_{(n)}}9.325\\
\end{align}$$
Numerically solving the first-order condition shows that the MLE lies close to this upper bound, with the gap narrowing as $n$ grows; by $n=1000$ the two agree to the second decimal digit.
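Here is a minimal numerical check of this claim (plain Python; `mle_c` is a hypothetical helper name, the bracket $[\ln n,\; \ln\frac{n\ln n-1}{\ln n-1}]$ requires $n\ge 3$, and recall $\hat\lambda = c/x_{(n)}$ at the root):

```python
from math import log, exp

def mle_c(n, tol=1e-12):
    """Solve c*(n - e^c) = 1 - e^c by bisection; the bracket below needs n >= 3."""
    f = lambda c: c * (n - exp(c)) - (1 - exp(c))
    lo = log(n)                                # lower bound: f(lo) = n - 1 > 0
    hi = log((n * log(n) - 1) / (log(n) - 1))  # upper bound: f(hi) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:  # f decreases on the bracket, so the root is to the right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for n in (10, 100, 1000, 10000):
    print(n, log(n), mle_c(n), log((n * log(n) - 1) / (log(n) - 1)))
```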
ADDENDUM: A CLOSED-FORM EXPRESSION
This is just an approximate solution (it only approximately maximizes the likelihood), but here it is.
Manipulating the 1st-order condition, we can write it as
$$\lambda = \frac 1{x_{(n)}}\ln \left[\frac {\lambda x_{(n)}n -1}{\lambda x_{(n)} -1}\right]$$
Now, one can show (see, for example, the derivation in the answer below) that
$$E[X_{(n)}] = \frac {H_n}{\lambda},\;\; H_n = \sum_{k=1}^n\frac 1k$$
Solving for $\lambda$ and inserting into the RHS of the implicit 1st-order condition, we obtain
$$\lambda = \frac 1{x_{(n)}}\ln \left[\frac {nH_n\frac {x_{(n)}}{E[X_{(n)}]} -1}{ H_n\frac {x_{(n)}}{E[X_{(n)}]} -1}\right]$$
We want an estimate of $\lambda$ given that $X_{(n)}=x_{(n)}$, i.e. $\hat \lambda \mid \{X_{(n)}=x_{(n)}\}$. But in that case we also have $E[X_{(n)}\mid X_{(n)}=x_{(n)}] =x_{(n)}$. This simplifies the expression, and we obtain
$$\hat \lambda = \frac 1{x_{(n)}}\ln \left[\frac {nH_n -1}{ H_n -1}\right]$$
One can verify that this closed-form expression stays close to the upper bound derived previously, while remaining a bit below the actual (numerically obtained) MLE.
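To check this numerically, one can iterate the fixed-point form of the 1st-order condition to obtain the MLE and compare it with the closed form; a sketch (the starting point $\ln n + 0.1$ and the iteration count are arbitrary choices):

```python
from math import log

def closed_form_c(n):
    """c = ln[(n*H_n - 1)/(H_n - 1)], the harmonic-number approximation."""
    H = sum(1.0 / k for k in range(1, n + 1))
    return log((n * H - 1) / (H - 1))

def mle_c_fixed_point(n, iters=200):
    """Iterate c <- ln[(n*c - 1)/(c - 1)], the fixed-point form of the FOC."""
    c = log(n) + 0.1  # arbitrary start above the lower bound ln(n), for n >= 3
    for _ in range(iters):
        c = log((n * c - 1) / (c - 1))
    return c

for n in (10, 100, 1000):
    print(n, closed_form_c(n), mle_c_fixed_point(n))
```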
Best Answer
The answer referenced in the comments is great, because it is based on straightforward probabilistic thinking. But it is possible to obtain the answer through elementary means, beginning from definitions.
Because $x_{(n)}$ is the largest of $n$ independent variables, the event $x_{(n)}\le x$ is the event that all the $x_i \le x.$ Stipulating that the $x_i$ have Exponential$(1)$ distributions says that for $x\gt 0,$ these events have common probability $1 - e^{-x}$ (and otherwise have zero probability).
Since probabilities of independent events multiply,
$$\Pr(x_{(n)} \le x) = \left(1 - e^{-x}\right)^n.$$
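A quick simulation spot-checks this formula; a minimal sketch in Python (the values of $n$, $x$, the seed, and the replication count are arbitrary choices):

```python
import random
from math import exp

n, x, reps = 5, 1.0, 200_000
rng = random.Random(0)

# Empirical frequency of the event {max of n iid Exponential(1) draws <= x}
hits = sum(max(rng.expovariate(1.0) for _ in range(n)) <= x
           for _ in range(reps))
print(hits / reps, (1 - exp(-x)) ** n)  # should agree to a few decimals
```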
One well-known formula for the expectation of a positive random variable with distribution function $F$ is the integral of $1-F$ from $0$ to $\infty.$ (Take the usual integral for the expectation and integrate by parts.) We are looking, then, to compute
$$E_n = E\left[x_{(n)}\right] = \int_0^\infty 1 - \left(1 - e^{-x}\right)^n\,\mathrm{d}x$$
for $n=1, 2, 3, \ldots.$
That initial "$1$" in the integrand is thorny, because its integral diverges, so we cannot separate it out. However, the differences between these quantities are considerably simpler to compute because the $1$'s cancel:
$$E_{n} - E_{n-1} = \int_0^\infty 1 - \left(1 - e^{-x}\right)^n - \left[1 - \left(1 - e^{-x}\right)^{n-1}\right]\,\mathrm{d}x = \int_0^\infty \left(1 - e^{-x}\right)^{n-1}e^{-x}\,\mathrm{d}x.$$
This is a textbook case for integration by substitution: the natural form to try is $u = 1-e^{-x},$ reducing the integral to
$$E_{n} - E_{n-1} = \int_0^1 u^{n-1}\,\mathrm{d}u = \frac{1}{n}.$$
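One can confirm the value of this integral numerically; a small sketch (assuming `scipy` is available, with $n=7$ an arbitrary example):

```python
from math import exp
from scipy.integrate import quad

n = 7  # arbitrary example
val, err = quad(lambda x: (1 - exp(-x)) ** (n - 1) * exp(-x), 0, float("inf"))
print(val, 1 / n)  # both should be close to 0.142857...
```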
Beginning with $E_0=\int (1-1)\mathrm{d}x = 0,$ we obtain recursively
$$\begin{aligned}E_n &= E_0 + (E_1 - E_0) + (E_2 - E_1) + \cdots + (E_n - E_{n-1}) \\&= 0 + \frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n} = H(n), \end{aligned}$$
the $n^\text{th}$ harmonic number.
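As a final check, $H(n)$ can be compared with a simulated mean of the maximum; a small sketch (replication count and seed are arbitrary):

```python
import random
from math import fsum

def harmonic(n):
    """H(n) = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

def mc_mean_max(n, reps=100_000, seed=1):
    """Monte Carlo estimate of E[max of n iid Exponential(1) draws]."""
    rng = random.Random(seed)
    return fsum(max(rng.expovariate(1.0) for _ in range(n))
                for _ in range(reps)) / reps

for n in (1, 5, 10):
    print(n, harmonic(n), mc_mean_max(n))
```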