It's just linear algebra. Take a concrete example in dimension $d = 3$, where each observation $x = (x_1, x_2, x_3)^T$ and the mean vector ${\bf \mu} = (\mu_1, \mu_2, \mu_3)^T$ lie in $\mathbb{R}^{3 \times 1}$. The MLE $\Sigma_{MLE} = \frac{1}{n}\sum_{i=1}^n (x^{(i)}-{\bf \mu})(x^{(i)}-{\bf \mu})^T$ averages outer products of the form:
$$(x-{\bf \mu})(x-{\bf \mu})^T = \left( \begin{array}{c} x_{1} - \mu_1 \\ x_{2} - \mu_2 \\ x_{3} - \mu_3\end{array} \right)\left( \begin{array}{ccc} x_{1} - \mu_1 & x_{2} - \mu_2 & x_{3} - \mu_3\end{array} \right) $$
$$ = \left( \begin{array}{ccc} (x_{1} - \mu_1)^2 & (x_{1} - \mu_1)(x_{2} - \mu_2) & (x_{1} - \mu_1)(x_{3} - \mu_3) \\ (x_{2} - \mu_2)(x_{1} - \mu_1) & (x_{2} - \mu_2)^2 & (x_{2} - \mu_2)(x_{3} - \mu_3) \\ (x_{3} - \mu_3)(x_{1} - \mu_1) & (x_{3} - \mu_3)(x_{2} - \mu_2) & (x_{3} - \mu_3)^2 \end{array} \right) $$
Taking expectations entry by entry gives:
$$ \mathbb{E}\big[(x-{\bf \mu})(x-{\bf \mu})^T\big] = \left( \begin{array}{ccc} Var(x_1) & Cov(x_1,x_2) & Cov(x_1,x_3) \\ Cov(x_2,x_1) & Var(x_2) & Cov(x_2,x_3)\\ Cov(x_3,x_1) & Cov(x_3,x_2) & Var(x_3) \end{array} \right) $$
so the $(j,k)$ entry of $\Sigma_{MLE}$ is the sample estimate of $Cov(x_j, x_k)$.
Also note that $Var(\sigma_{11})$ is odd notation; you probably mean $\sigma^2_{x_1}$ or $Var(x_1)$.
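The structure of the outer product above is easy to check numerically. Here is a minimal NumPy sketch; the observation and mean values are made up purely for illustration:

```python
import numpy as np

# Made-up 3-dimensional observation and mean vector (illustrative values only).
x = np.array([1.0, 2.0, 4.0])
mu = np.array([0.5, 1.5, 3.0])

d = x - mu              # centred observation
outer = np.outer(d, d)  # the 3x3 matrix (x - mu)(x - mu)^T

# The matrix is symmetric, its diagonal holds the squared deviations,
# and the (j, k) entry is (x_j - mu_j)(x_k - mu_k).
print(outer)
```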
The full derivation of the MLEs for IID data from an inverse Gaussian distribution can be found in the answer to this related question. In your case you have added an additional layer of complication by having observable data values $t_i = u_i - x_i - \tau$ that depend on some conditioning covariates and an additional parameter. From this formulation, your sampling density is:
$$f(\mathbf{u} | \mathbf{x}, \tau, \mu, \lambda) = \prod_{i=1}^n \Big( \frac{\lambda}{2 \pi (u_i-x_i-\tau)^3} \Big)^{1/2} \exp \Big( - \sum_{i=1}^n \frac{\lambda (u_i-x_i-\tau - \mu)^2}{2 \mu^2 (u_i-x_i-\tau)} \Big)$$
over the support $\mathbf{u} > \mathbf{x} + \tau \mathbf{1}$ (elementwise). The log-likelihood function is defined for $\tau < \min_i (u_i-x_i)$ and is given over this range by:
$$\ell_{\mathbf{u},\mathbf{x}}(\tau, \mu, \lambda) = \text{const} + \frac{n}{2} \ln (\lambda) - \frac{3}{2} \sum_{i=1}^n \ln (u_i-x_i-\tau) - \frac{\lambda}{2 \mu^2 } \sum_{i=1}^n \frac{(u_i-x_i-\tau - \mu)^2}{(u_i-x_i-\tau)}.$$
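As a sanity check, this log-likelihood is easy to code up directly. A small sketch, with the data values made up purely for illustration:

```python
import numpy as np

def loglik(tau, mu, lam, u, x):
    # Log-likelihood of the shifted inverse Gaussian model above,
    # with w_i = u_i - x_i - tau and the additive constant omitted.
    w = u - x - tau
    if np.any(w <= 0) or mu <= 0 or lam <= 0:
        return -np.inf  # outside the support / parameter space
    n = len(w)
    return (n / 2) * np.log(lam) - 1.5 * np.sum(np.log(w)) \
        - lam / (2 * mu ** 2) * np.sum((w - mu) ** 2 / w)

# Made-up data for illustration.
u = np.array([2.0, 3.5, 4.1, 2.8])
x = np.array([0.5, 1.0, 1.2, 0.7])
val = loglik(0.3, 2.0, 1.5, u, x)
print(val)
```

Note that the function returns $-\infty$ whenever $\tau \geqslant \min_i(u_i - x_i)$, which keeps any downstream optimiser inside the valid range.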
Finding the MLE: To facilitate our analysis we define the functions:
$$H_k(\tau) \equiv \frac{1}{n} \sum_{i=1}^n (u_i-x_i-\tau)^k.$$
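These moment-type functions are one-liners in code; a sketch with made-up data:

```python
import numpy as np

def H(k, tau, u, x):
    # H_k(tau) = (1/n) * sum_i (u_i - x_i - tau)^k, as defined above.
    w = u - x - tau
    return np.mean(w ** k)

# Made-up data for illustration.
u = np.array([2.0, 3.5, 4.1, 2.8])
x = np.array([0.5, 1.0, 1.2, 0.7])
print(H(1, 0.3, u, x), H(-1, 0.3, u, x))
```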
We then have:
$$\begin{equation} \begin{aligned}
\frac{\partial \ell_{\mathbf{u},\mathbf{x}}}{\partial \tau}(\tau, \mu, \lambda)
&= \frac{3}{2} \sum_{i=1}^n \frac{1}{u_i-x_i-\tau} + \frac{\lambda}{2 \mu^2 } \sum_{i=1}^n \frac{(u_i - x_i - \tau + \mu)(u_i-x_i-\tau - \mu)}{(u_i-x_i-\tau)^2} \\[10pt]
&= \frac{3}{2} \sum_{i=1}^n \frac{1}{u_i-x_i-\tau} + \frac{\lambda}{2 \mu^2 } \sum_{i=1}^n \frac{(u_i - x_i - \tau)^2 - \mu^2}{(u_i-x_i-\tau)^2} \\[10pt]
&= \frac{3}{2} \sum_{i=1}^n \frac{1}{u_i-x_i-\tau} + \frac{\lambda}{2 \mu^2 } \Big[ n - \mu^2 \sum_{i=1}^n \frac{1}{(u_i-x_i-\tau)^2} \Big] \\[10pt]
&= \frac{3n}{2} H_{-1}(\tau) + \frac{n \lambda}{2 \mu^2 } \Big[ 1 - \mu^2 H_{-2}(\tau) \Big]. \\[10pt]
\end{aligned} \end{equation}$$
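The factored form $(w + \mu)(w - \mu)/w^2$ in the first line of this derivative can be verified against a finite-difference quotient of the log-likelihood; the data and parameter values below are made up for illustration:

```python
import numpy as np

# Made-up data and parameters for the check.
u = np.array([2.0, 3.5, 4.1, 2.8])
x = np.array([0.5, 1.0, 1.2, 0.7])
mu, lam, tau = 2.0, 1.5, 0.3

def loglik(t):
    # Log-likelihood (constant omitted) as a function of tau only.
    w = u - x - t
    n = len(w)
    return (n / 2) * np.log(lam) - 1.5 * np.sum(np.log(w)) \
        - lam / (2 * mu ** 2) * np.sum((w - mu) ** 2 / w)

w = u - x - tau
analytic = 1.5 * np.sum(1.0 / w) \
    + lam / (2 * mu ** 2) * np.sum((w + mu) * (w - mu) / w ** 2)

# Central finite difference approximation of the tau-derivative.
h = 1e-6
numeric = (loglik(tau + h) - loglik(tau - h)) / (2 * h)
print(analytic, numeric)
```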
Taking $\tau$ to be fixed for the moment, the MLEs of the remaining inverse Gaussian parameters are:
$$\hat{\mu}(\tau) = H_1(\tau) \quad \quad \quad \frac{1}{\hat{\lambda}(\tau)} = H_{-1}(\tau) - \frac{1}{H_1(\tau)}.$$
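For a fixed $\tau$ these estimators are one-liners; a sketch with made-up data:

```python
import numpy as np

def profile_mles(tau, u, x):
    # MLEs of mu and lambda with tau held fixed, per the formulas above.
    w = u - x - tau
    H1 = np.mean(w)
    Hm1 = np.mean(1.0 / w)
    mu_hat = H1
    lam_hat = 1.0 / (Hm1 - 1.0 / H1)  # positive by the AM-HM inequality
    return mu_hat, lam_hat

# Made-up data for illustration.
u = np.array([2.0, 3.5, 4.1, 2.8])
x = np.array([0.5, 1.0, 1.2, 0.7])
mu_hat, lam_hat = profile_mles(0.3, u, x)
print(mu_hat, lam_hat)
```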
Substituting these functions yields:
$$\begin{equation} \begin{aligned}
\frac{\partial \ell_{\mathbf{u},\mathbf{x}}}{\partial \tau}(\tau, \hat{\mu}(\tau), \hat{\lambda}(\tau))
&= \frac{3n}{2} H_{-1}(\tau) + \frac{n}{2 H_1(\tau)^2 } \cdot \frac{1 - H_1(\tau)^2 H_{-2}(\tau)}{H_{-1}(\tau) - H_1(\tau)^{-1}} \\[10pt]
&= \frac{n}{2} \cdot \frac{1}{H_1(\tau)^2} \Big[ 3 H_{-1}(\tau) H_1(\tau)^2 + \frac{1 - H_1(\tau)^2 H_{-2}(\tau)}{H_{-1}(\tau) - H_1(\tau)^{-1}} \Big]. \\[10pt]
\end{aligned} \end{equation}$$
Setting this partial derivative to zero yields the critical point equation:
$$1 - 3 H_{-1}(\tau) H_1(\tau) + 3 H_{-1}(\tau)^2 H_1(\tau)^2 - H_1(\tau)^2 H_{-2}(\tau) = 0.$$
This critical point equation will need to be solved numerically, as there is no simple expression for the solution.
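In practice one can sidestep the critical point equation entirely and maximise the profile log-likelihood $\ell_{\mathbf{u},\mathbf{x}}(\tau, \hat{\mu}(\tau), \hat{\lambda}(\tau))$ numerically over $\tau < \min_i(u_i - x_i)$. A sketch on simulated data; the simulation setup, grid width, and use of `scipy.optimize.minimize_scalar` are all illustrative choices, not part of the derivation above:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Simulated data: t_i = tau + inverse-Gaussian(mu, lambda) noise,
# observed as u_i = x_i + t_i. All parameter values are illustrative.
rng = np.random.default_rng(0)
tau_true, mu_true, lam_true, n = 2.0, 3.0, 4.0, 500
x = rng.uniform(0.0, 1.0, n)
u = x + tau_true + rng.wald(mu_true, lam_true, n)

def profile_loglik(tau):
    # Log-likelihood with mu and lambda replaced by their fixed-tau MLEs.
    w = u - x - tau
    if np.any(w <= 0):
        return -np.inf
    H1, Hm1 = np.mean(w), np.mean(1.0 / w)
    mu_hat, lam_hat = H1, 1.0 / (Hm1 - 1.0 / H1)
    return (n / 2) * np.log(lam_hat) - 1.5 * np.sum(np.log(w)) \
        - lam_hat / (2 * mu_hat ** 2) * np.sum((w - mu_hat) ** 2 / w)

# Coarse grid scan below min(u_i - x_i), then a bounded 1-D refinement.
t_min = np.min(u - x)
grid = np.linspace(t_min - 10.0, t_min - 1e-6, 200)
vals = np.array([profile_loglik(t) for t in grid])
i = int(np.argmax(vals))
res = minimize_scalar(lambda t: -profile_loglik(t),
                      bounds=(grid[max(i - 1, 0)], grid[min(i + 1, 199)]),
                      method='bounded')
tau_hat = res.x
print(tau_hat)
```

The grid scan guards against the optimiser latching onto a poor local maximum, and the profile log-likelihood tends to $-\infty$ as $\tau$ approaches $\min_i(u_i - x_i)$, so the bounded search is well behaved.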
It's quite straightforward: set equation (3) equal to zero and solve for $\mu$. Have a try and let's see what you get.
Have a look at the Example section of https://en.m.wikipedia.org/wiki/Maximum_likelihood_estimation. I think there are some mistakes in your equations.