You would need that constraint if you were working with a multinomial distribution. Multinomial distributions are used when you have $K$ possible outcomes. These outcomes are coded in a 1-of-K fashion, that is, as a vector in which exactly one component is non-zero (usually 1). In that case,
$$
p(x|\mu) = \prod_{k=1}^{K}\mu_{k}^{x_{k}}
$$
which implies that,
$$
\sum_{x}p(x|\mu) = \sum_{k}\mu_{k} = 1
$$
Now, you have a vector $x$ in which each component follows a Bernoulli distribution, independently of the others. Each component codes whether a given feature is present or not.
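To make the distinction concrete, here is a minimal NumPy sketch (the parameter values are made up for illustration): the 1-of-K likelihood picks out a single $\mu_k$, so the $\mu_k$ must sum to 1, while the Bernoulli vector contributes an independent factor per component with no such constraint.

```python
import numpy as np

# Hypothetical parameters, for illustration only.
mu_cat = np.array([0.2, 0.5, 0.3])   # categorical (1-of-K): must sum to 1
mu_ber = np.array([0.2, 0.5, 0.3])   # Bernoulli vector: no sum constraint

# 1-of-K coded observation: exactly one component is 1.
x_onehot = np.array([0, 1, 0])
p_cat = np.prod(mu_cat ** x_onehot)  # picks out mu_cat[1] = 0.5

# Bernoulli-vector observation: any 0/1 pattern is allowed.
x_bits = np.array([1, 1, 0])
p_ber = np.prod(mu_ber ** x_bits * (1 - mu_ber) ** (1 - x_bits))

print(p_cat, p_ber)  # 0.5 and 0.2 * 0.5 * 0.7 = 0.07
```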
Is it true that up to some constant and scalar multiplication:
$\lim_{\sigma \to 0} Q((\pi, \mu, \Sigma), (\pi, \mu, \Sigma)^{\text{old}}) = -J$?
This is not the case since – as you observed yourself – the limit diverges.
However, if we first transform $Q$ and then take the limit, we converge to the k-means objective.
For $\Sigma_k = \sigma^2 I$ and $\pi_k = 1/K$ we have
\begin{align}
Q
&= \sum_{n,k} \gamma_{nk} \left( \log \pi_k + \log N(x_n \mid \mu_k, \Sigma_k) \right) \\
&= \sum_{n,k} \gamma_{nk} \left( \log\frac{1}{K} - \frac{D}{2}\log 2\pi\sigma^2 - \frac{||x_n - \mu_k||^2}{2\sigma^2} \right) \\
&= N \log\frac{1}{K} - \frac{1}{2\sigma^2} \sum_{n,k} \gamma_{nk} ||x_n - \mu_k||^2 - N \frac{D}{2} \log 2\pi\sigma^2,
\end{align}
where the last step uses $\sum_k \gamma_{nk} = 1$. Multiplying by $2\sigma^2$ (which does not affect the EM algorithm, since $\sigma$ is not optimized but held constant) and collecting all the constant terms in $C$, we see that
\begin{align}
Q &\propto - \sum_{n,k} \gamma_{nk} ||x_n - \mu_k||^2 + \sigma^2 C.
\end{align}
Note that maximizing this function with respect to $\mu$, for any fixed $\gamma$ and $\sigma$, gives the same result as maximizing the objective function above, i.e., it is an equivalent formulation of the M-step. But taking the limit now yields $-J$.
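A small numerical sketch of this limit (the toy data and fixed means are made up for illustration): as $\sigma^2 \to 0$ the responsibilities $\gamma_{nk}$ harden into nearest-mean assignments, so $\sum_{n,k} \gamma_{nk} ||x_n - \mu_k||^2$ approaches the k-means objective $J$ and the rescaled $Q$ approaches $-J$ up to constants.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))                # toy data: N=6 points in D=2
mu = np.array([[-1.0, 0.0], [1.0, 0.0]])   # two fixed means, K=2

# Squared distances ||x_n - mu_k||^2, shape (N, K).
sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)

for sigma2 in [1.0, 0.1, 1e-3]:
    # Responsibilities for pi_k = 1/K, Sigma_k = sigma^2 I:
    # gamma_nk is proportional to exp(-||x_n - mu_k||^2 / (2 sigma^2)).
    logits = -sq / (2 * sigma2)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    gamma = np.exp(logits)
    gamma /= gamma.sum(axis=1, keepdims=True)
    print(sigma2, (gamma * sq).sum())            # soft version of J

# Hard k-means objective J with nearest-mean assignments:
print("J =", sq.min(axis=1).sum())
```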
As an aside, a formulation of EM that is in my view slightly more elegant uses the objective function
\begin{align}
F(\mu, \gamma)
&= \sum_{n,k} \gamma_{nk} \log \frac{\pi_k N(x_n \mid \mu_k, \Sigma_k)}{\gamma_{nk}} \\
&\propto -\sum_{n,k} \gamma_{nk} ||x_n - \mu_k||^2 - 2\sigma^2 \sum_{n,k} \gamma_{nk} \log \gamma_{nk} + \sigma^2 C.
\end{align}
Using this objective function, the EM algorithm amounts to alternating between optimizing $F$ with respect to $\mu$ (M-step) and with respect to $\gamma$ (E-step). Taking the limit, we see that both steps converge to the corresponding steps of the k-means algorithm.
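As a sketch of this alternating scheme under the same assumptions ($\pi_k = 1/K$, $\Sigma_k = \sigma^2 I$; the function name and array shapes here are my own choices, not from the original answer):

```python
import numpy as np

def em_isotropic(X, mu, sigma2, iters=20):
    """Alternate the E-step (optimize F over gamma) and the M-step
    (optimize F over mu) for pi_k = 1/K and Sigma_k = sigma^2 I."""
    for _ in range(iters):
        # Squared distances ||x_n - mu_k||^2, shape (N, K).
        sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        # E-step: gamma_nk proportional to exp(-sq / (2 sigma^2)).
        # As sigma^2 -> 0 this hardens into the nearest-mean assignment.
        logits = -sq / (2 * sigma2)
        logits -= logits.max(axis=1, keepdims=True)
        gamma = np.exp(logits)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted means; in the limit this is
        # exactly the centroid update of k-means.
        mu = gamma.T @ X / gamma.sum(axis=0)[:, None]
    return mu, gamma
```

With a very small `sigma2`, this loop reduces to Lloyd's algorithm for k-means, which is the limit statement above.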
See also an alternative view of EM.
Best Answer
This answer and this tutorial give the derivatives of the log-density of a multivariate Gaussian; the rest should be easy.
$$Q=\sum_n\sum_k q_{kn}\left[\log\pi_k-\frac{D}{2}\log 2\pi -\frac{1}{2}\log|\Sigma_k| -\frac{1}{2}(x_n-\mu_k)^T\Sigma_k^{-1}(x_n-\mu_k)\right]$$
$$\frac{\partial Q}{\partial\mu_k}=-\frac{1}{2}\sum_n q_{kn}\,\frac{\partial}{\partial\mu_k}(x_n-\mu_k)^T\Sigma_k^{-1}(x_n-\mu_k)=\sum_n q_{kn}\,\Sigma_k^{-1}(x_n-\mu_k)$$
$$\frac{\partial Q}{\partial\Sigma_k}=-\frac{1}{2}\sum_n q_{kn}\left(\frac{\partial}{\partial\Sigma_k}\log|\Sigma_k| +\frac{\partial}{\partial\Sigma_k}(x_n-\mu_k)^T\Sigma_k^{-1}(x_n-\mu_k)\right)$$
$$=-\frac{1}{2}\sum_n q_{kn}\left(\Sigma_k^{-1}-\Sigma_k^{-1}(x_n-\mu_k)(x_n-\mu_k)^T\Sigma_k^{-1}\right)$$
Setting the derivatives to zero gives the desired results. From $\frac{\partial Q}{\partial\mu_k}=0$,
$$\mu_k=\frac{\sum_n q_{kn}x_n}{\sum_n q_{kn}}.$$
For $\Sigma_k$, pre- and post-multiply by $\Sigma_k$ and solve:
$$-\frac{1}{2}\sum_n q_{kn}\left(\Sigma_k^{-1}-\Sigma_k^{-1}(x_n-\mu_k)(x_n-\mu_k)^T\Sigma_k^{-1}\right)=0$$
$$-\frac{1}{2}\sum_n q_{kn}\,\Sigma_k\left(\Sigma_k^{-1}-\Sigma_k^{-1}(x_n-\mu_k)(x_n-\mu_k)^T\Sigma_k^{-1}\right)\Sigma_k=0$$
$$-\frac{1}{2}\sum_n q_{kn}\left(\Sigma_k-(x_n-\mu_k)(x_n-\mu_k)^T\right)=0$$
$$\Sigma_k=\frac{\sum_n q_{kn}(x_n-\mu_k)(x_n-\mu_k)^T}{\sum_n q_{kn}}$$
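For completeness, a short NumPy sketch of the resulting M-step updates (the function name and the $(K, N)$ layout of $q$ are my own choices for illustration):

```python
import numpy as np

def m_step(X, q):
    """M-step updates implied by setting the derivatives above to zero.
    X: (N, D) data; q: (K, N) responsibilities q_kn."""
    Nk = q.sum(axis=1)             # sum_n q_kn, shape (K,)
    # mu_k = sum_n q_kn x_n / sum_n q_kn
    mu = q @ X / Nk[:, None]
    K = q.shape[0]
    N, D = X.shape
    Sigma = np.empty((K, D, D))
    for k in range(K):
        Xc = X - mu[k]             # x_n - mu_k, shape (N, D)
        # Sigma_k = sum_n q_kn (x_n - mu_k)(x_n - mu_k)^T / sum_n q_kn
        Sigma[k] = (q[k][:, None] * Xc).T @ Xc / Nk[k]
    return mu, Sigma
```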