Let $Y = \lVert X \rVert^2$, so that $Y/\sigma^2 \sim \chi^2_p$. You want $\mathbb{E}[Y^{-1/2}]$, which is
$$ \frac{1}{\sigma}\cdot\dfrac{1}{2^{p/2}\Gamma(p/2)}\int_0^\infty y^{-1/2} y^{p/2 - 1} e^{-y/2}dy$$
$$ = \frac{1}{\sigma} \dfrac{2^{(p-1)/2}\Gamma((p-1)/2)}{2^{p/2}\Gamma(p/2)} = \frac{1}{\sigma\sqrt{2}}\dfrac{\Gamma((p-1)/2)}{\Gamma(p/2)}$$
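A quick Monte Carlo sanity check of this closed form (a sketch; the values of $p$, $\sigma$, and the sample size are arbitrary choices, not from the original):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
p, sigma = 5, 2.0

# Closed form: E[1/||X||] = Gamma((p-1)/2) / (sigma * sqrt(2) * Gamma(p/2))
closed_form = math.gamma((p - 1) / 2) / (sigma * math.sqrt(2) * math.gamma(p / 2))

# Monte Carlo estimate for X ~ N(0, sigma^2 I_p)
X = rng.normal(scale=sigma, size=(1_000_000, p))
mc = np.mean(1.0 / np.linalg.norm(X, axis=1))
```

The two estimates should agree to within Monte Carlo error.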
Let $X\sim N(\mu, \sigma^2)$. The "rectified" Gaussian is then $Y = \max(0, X)$. For both the expectation and the variance, use the law of total expectation, as Michael suggested. I will also use the first two moments of the truncated normal distribution, which are readily available on Wikipedia.
\begin{align}
\mathbb{E}Y &= \mathbb{E}[X|X>0]\mathbb{P}(X>0) + 0\times\mathbb{P}(X\leq0)
\end{align}
Notice that the random variable $X|X>0$ is a truncated normal (with parameters $\mu, \sigma, 0, \infty$) whose mean is $\mu + \sigma \frac{\phi(-\mu/\sigma)}{1-\Phi(-\mu/\sigma)}$. Hence:
$$
\mathbb{E}Y = \mu \left(1-\Phi\left(-\frac{\mu}{\sigma}\right)\right) + \sigma \phi\left(-\frac{\mu}{\sigma}\right)
$$
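This mean can be verified numerically (a sketch; $\mu$, $\sigma$, and the sample size are arbitrary choices):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, sigma = 0.5, 1.3
a = -mu / sigma  # standardized truncation point

# E[max(0, X)] = mu * (1 - Phi(-mu/sigma)) + sigma * phi(-mu/sigma)
mean_formula = mu * (1 - norm.cdf(a)) + sigma * norm.pdf(a)

# Monte Carlo estimate of E[max(0, X)]
y = np.maximum(0.0, rng.normal(mu, sigma, size=1_000_000))
mc_mean = y.mean()
```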
For the variance,
\begin{align}
Var(Y) &= \mathbb{E}[Y^2] - (\mathbb{E}Y)^2 \\
&= \mathbb{E}[X^2|X>0]\mathbb{P}(X>0) + 0\times\mathbb{P}(X\leq0)- (\mathbb{E}Y)^2
\end{align}
The only ingredient we are still missing is the second moment of the truncated normal, $\mathbb{E}[X^2|X>0]$, which can be obtained from its mean and variance:
\begin{align}
\mathbb{E}[X^2|X>0] &= \sigma^2\left( 1+ \frac{-\frac{\mu}{\sigma} \phi(-\frac{\mu}{\sigma})}{1-\Phi\left(-\frac{\mu}{\sigma}\right)} - \frac{\phi\left(-\frac{\mu}{\sigma} \right)^2}{\left(1-\Phi\left(-\frac{\mu}{\sigma}\right)\right)^2}\right) +
\left(\mu + \sigma \frac{\phi\left(-\frac{\mu}{\sigma}\right)}{1-\Phi\left(-\frac{\mu}{\sigma}\right)} \right)^2 \\
& = \sigma^2 +\mu^2 +\mu\sigma \frac{\phi\left(-\frac{\mu}{\sigma}\right)}{1-\Phi\left(-\frac{\mu}{\sigma}\right)}
\end{align}
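The simplification in the last line can be checked numerically by conditioning a normal sample on positivity (a sketch; the parameter values are arbitrary):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
mu, sigma = 0.8, 1.5
a = -mu / sigma

# Simplified second moment: E[X^2 | X > 0] = sigma^2 + mu^2 + mu*sigma*phi(a)/(1-Phi(a))
m2_formula = sigma**2 + mu**2 + mu * sigma * norm.pdf(a) / (1 - norm.cdf(a))

# Empirical conditional second moment
x = rng.normal(mu, sigma, size=2_000_000)
m2_mc = np.mean(x[x > 0] ** 2)
```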
You can expand $(\mathbb{E}Y)^2$ and try to simplify a bit further:
\begin{align}
Var(Y) &= (\sigma^2 + \mu^2)\left(1-\Phi\left(-\frac{\mu}{\sigma}\right)\right) + \mu\sigma \phi\left(-\frac{\mu}{\sigma}\right) - \left(\mu \left(1-\Phi\left(-\frac{\mu}{\sigma}\right)\right) + \sigma \phi\left(-\frac{\mu}{\sigma}\right) \right)^2 \\
&= \mu^2\Phi\left(-\frac{\mu}{\sigma}\right)\left(1-\Phi\left(-\frac{\mu}{\sigma}\right)\right) + \mu\sigma\phi\left(-\frac{\mu}{\sigma}\right)\left(2\Phi\left(-\frac{\mu}{\sigma}\right)-1 \right) + \sigma^2\left(1-\Phi\left(-\frac{\mu}{\sigma}\right) - \phi\left(-\frac{\mu}{\sigma}\right)^2\right)
\end{align}
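As a final check, the simplified variance expression agrees with a direct simulation of $\max(0, X)$ (a sketch; the parameter values are arbitrary, and a negative $\mu$ is used to exercise the truncation):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
mu, sigma = -0.4, 0.9
a = -mu / sigma
Phi, phi = norm.cdf(a), norm.pdf(a)

# Var(max(0, X)) via the simplified expression above
var_formula = (mu**2 * Phi * (1 - Phi)
               + mu * sigma * phi * (2 * Phi - 1)
               + sigma**2 * (1 - Phi - phi**2))

# Monte Carlo variance of the rectified Gaussian
y = np.maximum(0.0, rng.normal(mu, sigma, size=2_000_000))
mc_var = y.var()
```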
Best Answer
I will use the notation $a^\top b$ instead of $\langle a, b\rangle$.
If $S \sim N(\mu, \Sigma)$ then $U := w^\top S \sim N(w^\top \mu, w^\top \Sigma w)$.
Your integral is $$E[\Phi(w^\top S)] = E[\Phi(U)].$$
Let $Z \sim N(0, 1)$ be independent of $U$, and note that $\Phi(u) = P(Z \le u) = P(Z \le U \mid U = u)$.
Then $$E[\Phi(U)] = E[P(Z \le U \mid U)] = P(Z \le U).$$
Finally, note that $Z-U \sim N(-w^\top \mu, w^\top \Sigma w + 1)$, so $$P(Z \le U) = P(Z-U \le 0) = \Phi\left(\frac{w^\top \mu}{\sqrt{w^\top \Sigma w + 1}}\right).$$
The argument above is an adaptation of an answer mentioned in the comments on one of the answers to the question you linked.
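The closed form is easy to verify by simulation (a sketch; the particular $\mu$, $\Sigma$, and $w$ below are arbitrary choices, with $\Sigma$ picked to be positive definite):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
mu = np.array([0.3, -0.1, 0.7])
Sigma = np.array([[1.0, 0.2, 0.0],
                  [0.2, 0.5, 0.1],
                  [0.0, 0.1, 0.8]])
w = np.array([0.5, -1.0, 0.25])

# Closed form: E[Phi(w^T S)] = Phi(w^T mu / sqrt(w^T Sigma w + 1))
closed = norm.cdf(w @ mu / np.sqrt(w @ Sigma @ w + 1))

# Monte Carlo estimate of E[Phi(w^T S)] for S ~ N(mu, Sigma)
S = rng.multivariate_normal(mu, Sigma, size=1_000_000)
mc = norm.cdf(S @ w).mean()
```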