$\newcommand{\si}{\sigma}\newcommand{\Si}{\Sigma}\newcommand{\ep}{\varepsilon}\newcommand{\vpi}{\varphi}\newcommand{\R}{\mathbb R}$In this "Gaussian" setting especially, it is convenient to approximate the delta function by the normal distribution $N(0,\ep^2)$ with $\ep\downarrow0$, so that
\begin{equation*}
R[f](w,b)=\lim_{\ep\downarrow0}R_\ep[f](w,b), \tag{1}\label{1}
\end{equation*}
where
\begin{equation*}
R_\ep[f](w,b):=\int_{\R^n}dx\,\vpi_\ep(w^\top x-b)f(x),
\end{equation*}
\begin{equation*}
\vpi_\ep(t):=\frac1\ep
\vpi\Big(\frac t\ep\Big),
\end{equation*}
and $\vpi$ is the standard normal density.
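As a quick numerical sanity check (a minimal sketch, assuming NumPy; the test function $f(t)=\cos t$ is an arbitrary illustrative choice), one can verify that $\int_\R \vpi_\ep(t)f(t)\,dt\to f(0)$ as $\ep\downarrow0$:

```python
import numpy as np

def phi_eps(t, eps):
    # (1/eps) * phi(t/eps), with phi the standard normal density
    return np.exp(-(t / eps) ** 2 / 2) / (eps * np.sqrt(2 * np.pi))

f = np.cos                       # illustrative smooth test function, f(0) = 1
t = np.linspace(-5.0, 5.0, 200001)
dt = t[1] - t[0]

# integral of phi_eps(t) f(t) dt for shrinking eps (Riemann sum)
vals = [np.sum(phi_eps(t, eps) * f(t)) * dt for eps in (1.0, 0.1, 0.01)]
```

For $f=\cos$ the exact value at scale $\ep$ is $e^{-\ep^2/2}$, so the computed integrals approach $f(0)=1$ as $\ep$ shrinks.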
Now, for $h(x)\equiv x_1 g(x)$ and $g$ the density of $N(\mu,\Si)$, we can write
\begin{equation*}
R_\ep[h](w,b)
=\int_{\R^n}dx\,g(x) x_1 \vpi_\ep(w^\top x-b)
=E\big[e_1^\top X\,\vpi_\ep(w^\top X-b)\big],
\end{equation*}
where $e_1:=[1,0,\dots,0]^\top\in\R^{n\times 1}=\R^n$ and $X\sim N(\mu,\Si)$.
Note that the joint distribution of $e_1^\top X$ and $w^\top X$ is bivariate normal with respective means
\begin{equation*}
\mu_1:=e_1^\top\mu\quad\text{and}\quad\mu_2:=w^\top\mu, \tag{2}\label{2}
\end{equation*}
respective standard deviations
\begin{equation*}
\si_1:=\sqrt{e_1^\top\Si e_1} \quad\text{and}\quad \si_2:=\sqrt{w^\top\Si w}, \tag{3}\label{3}
\end{equation*}
and correlation
\begin{equation*}
\rho:=\frac{e_1^\top\Si w}{\si_1\si_2}. \tag{4}\label{4}
\end{equation*}
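The parameters \eqref{2}--\eqref{4} are easy to check numerically (a sketch, assuming NumPy; the values of $\mu$, $\Si$, $w$ below are illustrative): the correlation computed from \eqref{4} should match the sample correlation of $e_1^\top X$ and $w^\top X$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
mu = np.array([1.0, -0.5, 2.0])          # illustrative mean
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)          # illustrative positive-definite covariance
w = np.array([0.3, -1.2, 0.7])
e1 = np.eye(n)[:, 0]

mu1, mu2 = e1 @ mu, w @ mu                                  # means, as in (2)
s1, s2 = np.sqrt(e1 @ Sigma @ e1), np.sqrt(w @ Sigma @ w)   # std devs, as in (3)
rho = (e1 @ Sigma @ w) / (s1 * s2)                          # correlation, as in (4)

# sample check: correlation of e1'X and w'X for X ~ N(mu, Sigma)
X = rng.multivariate_normal(mu, Sigma, size=400_000)
emp_rho = np.corrcoef(X @ e1, X @ w)[0, 1]
```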
So, straightforward calculations yield
\begin{equation*}
R_\ep[h](w,b)
=
\frac{\rho\si_1\si_2(b-\mu_2)+\mu_1(\si_2^2+\ep^2)}{\sqrt{2\pi}\,(\si_2^2+\ep^2)^{3/2}}\,
\exp\Big\{-\frac{(b-\mu_2)^2}{2(\si_2^2+\ep^2)}\Big\}.
\end{equation*}
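This closed form can be sanity-checked by Monte Carlo (a sketch, assuming NumPy; $\mu$, $\Si$, $w$, $b$, $\ep$ are illustrative choices): the expression above should agree with the sample mean of $X_1\,\vpi_\ep(w^\top X-b)$.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, -0.5, 2.0])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.5, -0.2],
                  [0.1, -0.2, 1.0]])
w, b, eps = np.array([0.3, -1.2, 0.7]), 0.4, 0.5
e1 = np.eye(3)[:, 0]

mu1, mu2 = e1 @ mu, w @ mu
s1, s2 = np.sqrt(Sigma[0, 0]), np.sqrt(w @ Sigma @ w)
rho = (e1 @ Sigma @ w) / (s1 * s2)

# closed form for R_eps[h](w, b)
v = s2**2 + eps**2
closed = ((rho * s1 * s2 * (b - mu2) + mu1 * v)
          / (np.sqrt(2 * np.pi) * v**1.5) * np.exp(-(b - mu2)**2 / (2 * v)))

# Monte Carlo estimate of E[X_1 * phi_eps(w'X - b)]
X = rng.multivariate_normal(mu, Sigma, size=1_000_000)
phi = np.exp(-((X @ w - b) / eps)**2 / 2) / (eps * np.sqrt(2 * np.pi))
mc = np.mean(X[:, 0] * phi)
```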
Finally, by \eqref{1},
\begin{equation*}
R[h](w,b)
=
\frac{\rho\si_1(b-\mu_2)+\mu_1\si_2}{\sqrt{2\pi}\,\si_2^2}\,
\exp\Big\{-\frac{(b-\mu_2)^2}{2\si_2^2}\Big\},
\end{equation*}
with $\mu_1,\mu_2,\si_1,\si_2,\rho$ given by \eqref{2}--\eqref{4}.
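That the $\ep\downarrow0$ limit of the $R_\ep[h]$ expression recovers this formula is a pure arithmetic check (a sketch, assuming NumPy; the parameter values are illustrative):

```python
import numpy as np

# illustrative parameter values for (2)-(4)
mu1, mu2, s1, s2, rho, b = 1.0, 2.3, 1.4, 1.7, 0.13, 0.4

def R_eps(eps):
    # closed form for R_eps[h](w, b) from the previous display
    v = s2**2 + eps**2
    return ((rho * s1 * s2 * (b - mu2) + mu1 * v)
            / (np.sqrt(2 * np.pi) * v**1.5) * np.exp(-(b - mu2)**2 / (2 * v)))

# the eps -> 0 limit, i.e. R[h](w, b)
R_limit = ((rho * s1 * (b - mu2) + mu1 * s2)
           / (np.sqrt(2 * np.pi) * s2**2) * np.exp(-(b - mu2)**2 / (2 * s2**2)))
```

Shrinking $\ep$ in `R_eps` drives the value to `R_limit`, as \eqref{1} requires.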
Best Answer
$\newcommand{\Si}{\Sigma}\newcommand{\R}{\mathbb R}$First, one should not denote a random vector in $\R^n$ (which is not actually a vector in $\R^n$ but a function with values in $\R^n$) and a true, non-random vector in $\R^n$ by the same symbol, such as $\mathbf x$.
Accordingly, let $X\sim N(0,S)$, where $S:=H^{-1}$. Let $c:=C$ and $g:=\mathbf g\ne0$. We want to find the conditional density of $X$ given $X^\top g=c$; as usual, we identify $\R^n$ with the set $\R^{n\times1}$ of all $n\times1$ real matrices. Without loss of generality, $g$ is a unit vector; otherwise, replace $g$ and $c$ by $g/|g|$ and $c/|g|$, respectively, where $|g|$ is the Euclidean norm of $g$. Let $g_1,\dots,g_n$ be any orthonormal vectors in $\R^n$ such that $g_n=g$.
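Such a basis is easy to construct in practice: a Householder reflection that maps $e_n$ to $g$ is an orthogonal matrix whose columns are orthonormal with last column $g$. A sketch (assuming NumPy; the function name is my own):

```python
import numpy as np

def basis_with_last_column(g):
    """Orthogonal n x n matrix G = [g_1, ..., g_n] with g_n = g/|g|,
    built from the Householder reflection mapping e_n to g/|g|."""
    g = np.asarray(g, dtype=float)
    g = g / np.linalg.norm(g)
    n = g.size
    v = g - np.eye(n)[:, -1]
    if np.allclose(v, 0.0):       # g already equals e_n
        return np.eye(n)
    return np.eye(n) - 2.0 * np.outer(v, v) / (v @ v)

G = basis_with_last_column([1.0, 2.0, 2.0])
```

Since a Householder matrix is orthogonal, $G^\top G=I$ automatically, and $Ge_n=g$ by construction.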
Let $Y_1,\dots,Y_n$ be the coordinates of the random vector $X$ in the orthonormal basis $(g_1,\dots,g_n)$, so that $Y_k=g_k^\top X$ and $X=\sum_{k=1}^n Y_kg_k$. Let $Y:=[Y_1,\dots,Y_n]^\top$, so that $Y=G^\top X$ and $X=GY$, where $G:=[g_1,\dots,g_n]$, the $n\times n$ matrix with columns $g_1,\dots,g_n$. The covariance matrix of $Y$ is \begin{equation} EYY^\top=G^\top SG=: \begin{bmatrix} \Si_{11}&\Si_{12} \\ \Si_{21}&\Si_{22} \end{bmatrix}, \end{equation} where $\Si_{12}$ is the $(n-1)\times1$ cross-covariance matrix of $[Y_1,\dots,Y_{n-1}]^\top$ and $Y_n$. Also, $Y\sim N(0,G^\top SG)$. So, the conditional pdf of $[Y_1,\dots,Y_{n-1}]^\top$ given $Y_n=c$ is the pdf, say $p_c$, of the $(n-1)$-dimensional normal distribution \begin{equation} N(\Si_{12}\Si_{22}^{-1}c,\Si_{11}-\Si_{12}\Si_{22}^{-1}\Si_{21}). \end{equation}
This conditional pdf, $p_c$, can be regarded as the desired conditional density of $X$ given $X^\top g=c$. Indeed, $p_c$ can be used as follows: for any (say) nonnegative Borel function $f\colon\R^n\to\R$, \begin{equation} \begin{aligned} E(f(X)|X^\top g=c)&=E(f(GY)|Y_n=c) \\ &=\int_{\R^{n-1}}f(G[y_1,\dots,y_{n-1},c]^\top)p_c(y_1,\dots,y_{n-1})\, dy_1\cdots dy_{n-1}. \end{aligned} \end{equation} In particular, for any Borel subset $B$ of $\R^n$, \begin{equation} \begin{aligned} P(X\in B|X^\top g=c)&=P(GY\in B|Y_n=c) \\ &=\int_{\R^{n-1}}1(G[y_1,\dots,y_{n-1},c]^\top\in B)p_c(y_1,\dots,y_{n-1})\, dy_1\cdots dy_{n-1}. \end{aligned} \end{equation}
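The whole recipe can be exercised end to end (a sketch, assuming NumPy; $S$, $g$, $c$ are illustrative values, and the Householder construction of $G$ is one convenient choice of orthonormal basis). As an independent check for $f(x)=x$, the result is compared with the known closed form $E(X\mid g^\top X=c)=Sgc/(g^\top Sg)$ for a unit vector $g$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, c = 3, 0.8
S = np.array([[2.0, 0.4, 0.1],
              [0.4, 1.0, 0.3],
              [0.1, 0.3, 1.5]])           # illustrative S = H^{-1}
g = np.array([1.0, 2.0, 2.0]) / 3.0       # unit vector

# orthonormal basis with last column g (Householder reflection)
v = g - np.eye(n)[:, -1]
G = np.eye(n) - 2.0 * np.outer(v, v) / (v @ v)

# partition G'SG and form the conditional mean and covariance
Sig = G.T @ S @ G
S11, S12, S22 = Sig[:-1, :-1], Sig[:-1, -1:], Sig[-1, -1]
m = (S12 / S22) * c                       # Sigma_12 Sigma_22^{-1} c
Cc = S11 - S12 @ S12.T / S22              # Sigma_11 - Sigma_12 Sigma_22^{-1} Sigma_21

# Monte Carlo under p_c: E(f(X) | X'g = c) for f(x) = x
Y = rng.multivariate_normal(m.ravel(), Cc, size=200_000)
X = (G @ np.column_stack([Y, np.full(len(Y), c)]).T).T
mc_mean = X.mean(axis=0)

direct = S @ g * c / (g @ S @ g)          # closed form for E(X | g'X = c)
```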