By the chain rule of entropy, we have
$$
\tag{1}
h(\mathbf{x},\mathbf{r})=h(\mathbf{r})+h(\mathbf{x}\mid \mathbf{r})
$$
as well as
$$
\tag{2}
h(\mathbf{x},\mathbf{r})=h(\mathbf{x})+h(\mathbf{r}\mid \mathbf{x}).
$$
Now, since $\mathbf{r}$ is a deterministic function of $\mathbf{x}$ (and is discrete, so $h(\mathbf{r}\mid \mathbf{x})$ is really a discrete entropy rather than a differential one), we have
$$
h(\mathbf{r}\mid \mathbf{x})=0,
$$
which, after substituting into (2), gives
$$
h(\mathbf{x},\mathbf{r}) = h(\mathbf{x}).
$$
Substituting the last result into (1) leads to
$$
\begin{align}
h(\mathbf{x}\mid \mathbf{r}) &= h(\mathbf{x})-h(\mathbf{r})\\
&=\frac{1}{2}\log\left((2\pi e)^N \det(\mathbf{\Sigma})\right)-\left(-\sum_\mathbf{r}p(\mathbf{r})\log(p(\mathbf{r}))\right).
\end{align}
$$
I am not aware of any closed-form formula for $p(\mathbf{r})$; according to this answer, none exists. Therefore, you would in principle have to enumerate all $2^N$ possible vectors $\mathbf{r}$ and compute the pmf $p(\mathbf{r})$ numerically. My guess is that this procedure becomes impractical even for moderate $N$.
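To make the procedure concrete, here is a minimal sketch that estimates $p(\mathbf{r})$ by Monte Carlo instead of exact enumeration. It assumes $\mathbf{r}=\operatorname{sign}(\mathbf{x})$ componentwise (one deterministic quantizer with exactly $2^N$ values); the covariance, $N$, and sample size are arbitrary choices, not anything given in the question.

```python
import numpy as np
from collections import Counter

# Sketch: Monte Carlo estimate of the pmf p(r) and of h(x|r) = h(x) - H(r),
# assuming r = sign(x) componentwise (an assumption, not from the question).
rng = np.random.default_rng(0)
N = 4
A = rng.standard_normal((N, N))
Sigma = A @ A.T + N * np.eye(N)      # arbitrary positive-definite covariance

# Differential entropy of x ~ N(0, Sigma), in nats
h_x = 0.5 * np.log((2 * np.pi * np.e) ** N * np.linalg.det(Sigma))

# Estimate each orthant probability p(r) as a relative frequency
samples = rng.multivariate_normal(np.zeros(N), Sigma, size=1_000_000)
counts = Counter(tuple(row) for row in (samples >= 0).astype(int))
p = np.array(list(counts.values())) / samples.shape[0]

H_r = -np.sum(p * np.log(p))         # discrete entropy of r, in nats
print(f"h(x)   = {h_x:.4f} nats")
print(f"H(r)   = {H_r:.4f} nats  (upper bound N log 2 = {N * np.log(2):.4f})")
print(f"h(x|r) = {h_x - H_r:.4f} nats")
```

Sign patterns that never appear in the sample simply contribute nothing to the estimate; covering every orthant with non-negligible probability is exactly the $2^N$-fold work that becomes impractical as $N$ grows.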
Remark: The above formula is the same as yours. Indeed,
$$
\begin{align}
-\sum_{i\in \mathcal{R}} \int_{I_i} f(\mathbf{x})\log\frac{f(\mathbf{x})}{p(\mathbf{r}_i)}d\mathbf{x}&=-\sum_{i\in \mathcal{R}} \int_{I_i} f(\mathbf{x})\log f(\mathbf{x})d\mathbf{x}- \left(-\sum_{i\in \mathcal{R}} \log p(\mathbf{r}_i)\int_{I_i} f(\mathbf{x}) d\mathbf{x}\right)\\
&=-\int_{\mathbb{R}^N} f(\mathbf{x})\log f(\mathbf{x})d\mathbf{x}- \left(-\sum_{i\in \mathcal{R}} p(\mathbf{r}_i) \log p(\mathbf{r}_i)\right),
\end{align}
$$
since $\cup_{i\in\mathcal{R}}I_i=\mathbb{R}^N$, $I_i \cap I_j = \emptyset, \forall i\neq j$, and $\int_{I_i} f(\mathbf{x}) d\mathbf{x} = p(\mathbf{r}_i)$.
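For a concrete instance of the formula, take $N=1$ with $x\sim\mathcal{N}(0,\sigma^2)$ and, as assumed in the sketch above, $r=\operatorname{sign}(x)$: by symmetry $p(r=1)=p(r=-1)=\tfrac{1}{2}$, so $h(\mathbf{r})=\log 2$ and
$$
h(x\mid r)=\frac{1}{2}\log\left(2\pi e\,\sigma^2\right)-\log 2.
$$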
First, some properties of $\Gamma(q)$. It is a probability density, so
$$
\int_{-\infty}^\infty \Gamma(q) \,dq = 1
$$
Without loss of generality, we can assume the variable $q$ has been shifted so that the mean is zero:
$$
\int_{-\infty}^\infty q\, \Gamma(q)\, dq = 0
$$
We can rescale $q$ linearly so that the "fixed variance" equals one:
$$
\int_{-\infty}^\infty q^2\, \Gamma(q)\, dq = 1
$$
Subject to those three constraints, we wish to find an extremum of the negative differential entropy
$$\int_{-\infty}^\infty \Gamma(q)\, \log(\Gamma(q))\,dq $$
We rewrite the fixed-variance constraint in a form that is homogeneous in $\Gamma$, so that the normalization constraint can be ignored until the end of the problem and the resulting function normalized afterwards. Namely,
$$
\int_{-\infty}^\infty q^2\, \Gamma(q)\, dq =
\int_{-\infty}^\infty \Gamma(q) \,dq
$$
So we have Lagrange multipliers $\lambda_1, \lambda_2$ associated with the mean and variance constraints, and our Euler equation is
$$
\frac{\partial}{\partial \Gamma} \left( \Gamma(q)\, \log(\Gamma(q)) \right)+\lambda_1 \frac{\partial}{\partial \Gamma}\left( q\, \Gamma(q)\right)+\lambda_2 \frac{\partial}{\partial \Gamma}\left( q^2\, \Gamma(q) -\Gamma(q)\right) =0
$$
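Here $\frac{\partial}{\partial \Gamma}$ is taken pointwise in $q$, treating the value $\Gamma(q)$ as an ordinary variable, so the three derivatives are
$$
\frac{\partial}{\partial \Gamma}\bigl(\Gamma\log\Gamma\bigr)=\log\Gamma+1,\qquad
\frac{\partial}{\partial \Gamma}\bigl(q\,\Gamma\bigr)=q,\qquad
\frac{\partial}{\partial \Gamma}\bigl(q^2\,\Gamma-\Gamma\bigr)=q^2-1,
$$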
so the Euler equation becomes
$$
1+\log(\Gamma(q)) + \lambda_1q + \lambda_2 (q^2-1) = 0
$$
which says that any extremum must be of the form
$$ \Gamma(q) = e^{-\lambda_2 q^2 - \lambda_1 q + (\lambda_2-1)}
$$
and now we have to choose $\lambda_1$ and $\lambda_2$ so that the mean and variance come out right (for zero mean and unit variance this gives $\lambda_1 = 0$ and $\lambda_2 = \tfrac{1}{2}$, so $\Gamma(q)\propto e^{-q^2/2}$), and finally normalize the function, because we had deferred the constraint that the probability integrates to $1$.
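As a quick numerical check of the conclusion, here is a sketch comparing the closed-form differential entropies (in nats) of a few zero-mean, unit-variance densities; the Gaussian should come out on top:

```python
import numpy as np

# Differential entropies (nats) at unit variance:
#   Gaussian:                                            0.5 * log(2*pi*e)
#   Uniform of width w = sqrt(12) (variance w^2/12 = 1): log(w)
#   Laplace with scale b = 1/sqrt(2) (variance 2b^2 = 1): 1 + log(2b)
entropies = {
    "gaussian": 0.5 * np.log(2 * np.pi * np.e),
    "laplace": 1.0 + np.log(2.0 / np.sqrt(2.0)),
    "uniform": np.log(np.sqrt(12.0)),
}
for name, h in sorted(entropies.items(), key=lambda kv: -kv[1]):
    print(f"{name:8s} h = {h:.4f} nats")
# gaussian (~1.4189) > laplace (~1.3466) > uniform (~1.2425)
```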
It's better to simplify the term $\mathbb{E}[(x-\mu)^T \Sigma^{-1}(x-\mu)]$ directly:
$$
\begin{align}
\mathbb{E}[(x-\mu)^T \Sigma^{-1}(x-\mu)] &= \mathbb{E}[\mathrm{tr}((x-\mu)^T \Sigma^{-1}(x-\mu))]\\
&= \mathbb{E}[\mathrm{tr}(\Sigma^{-1}(x-\mu)(x-\mu)^T)]\\
&= \mathrm{tr}(\mathbb{E}[\Sigma^{-1}(x-\mu)(x-\mu)^T])\\
&= \mathrm{tr}(\Sigma^{-1}\mathbb{E}[(x-\mu)(x-\mu)^T])\\
&= \mathrm{tr}(\Sigma^{-1}\Sigma)\\
&= \mathrm{tr}(I)=D,
\end{align}
$$
where $D$ is the dimension of $x$. The first equality holds because a scalar equals its own trace, and the second uses the cyclic property $\mathrm{tr}(AB)=\mathrm{tr}(BA)$.
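A quick Monte Carlo sanity check of this identity (a sketch; the mean and covariance below are arbitrary examples):

```python
import numpy as np

# Verify E[(x - mu)^T Sigma^{-1} (x - mu)] = D by simulation.
rng = np.random.default_rng(1)
D = 5
mu = rng.standard_normal(D)
A = rng.standard_normal((D, D))
Sigma = A @ A.T + D * np.eye(D)      # arbitrary positive-definite covariance

x = rng.multivariate_normal(mu, Sigma, size=200_000)
d = x - mu
quad = np.einsum("ij,jk,ik->i", d, np.linalg.inv(Sigma), d)
print(quad.mean())                   # ~ 5.0 = D, up to Monte Carlo error
```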