With a uniform prior and (independent) observations from a normal distribution, the resulting posterior is a truncated normal distribution. Here, however, the observations are drawn from a truncated normal distribution, which makes things more complicated.
First, you can 'ignore' the integral in the denominator, since it is just a constant ensuring that the posterior integrates to one. In general
$$p(\mu | x) \propto p(x|\mu)p(\mu).$$
As you have derived (note that the factor $1/\sigma$ is a constant and has been dropped):
$$p(\mu|x) \propto \frac{\phi\left(\frac{x-\mu}{\sigma}\right)}{\Phi\left(\frac{1-\mu}{\sigma}\right) - \Phi\left(\frac{-\mu}{\sigma}\right)}I_{\mu \in [0,1]}.$$
At first glance this looks like a truncated normal again; however, the variable is now $\mu$ rather than $x$, and comparing with the truncated normal density shows that this is no longer the case.
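To see what this posterior looks like, here is a minimal numeric sketch; the observation $x=0.3$ and scale $\sigma=0.2$ are arbitrary assumptions for illustration. It normalizes the unnormalized posterior on $[0,1]$ numerically, which is exactly the 'ignored' denominator above.
(* Minimal sketch: assumed single observation x = 0.3 and known σ = 0.2 *)
post[μ_] := PDF[NormalDistribution[μ, 0.2], 0.3]/
   (CDF[NormalDistribution[μ, 0.2], 1] - CDF[NormalDistribution[μ, 0.2], 0]);
Z = NIntegrate[post[μ], {μ, 0, 1}]; (* the 'ignored' normalizing constant *)
Plot[post[μ]/Z, {μ, 0, 1}]          (* posterior density of μ on [0, 1] *)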
Here is what I found using Mathematica. First, the normalizing constant of the truncated bivariate normal distribution is found, then the first two moments, followed by the variance. The variance is then simplified so that we end up with the usual notation.
Here are the results:
$$E(X|Y<y)=\mu_X-\frac{\rho \sigma_X \phi \left(\frac{y-\mu_Y}{\sigma_Y}\right)}{\Phi \left(\frac{y-\mu_Y}{\sigma_Y}\right)}$$
$$V(X|Y<y)=\sigma_X^2 \left(\frac{\rho ^2 \phi \left(\frac{y-\mu_Y}{\sigma_Y}\right) \left((\mu_Y-y) \Phi \left(\frac{y-\mu_Y}{\sigma_Y}\right)-\sigma_Y \phi \left(\frac{y-\mu_Y}{\sigma_Y}\right)\right)}{\sigma_Y \Phi \left(\frac{y-\mu_Y}{\sigma_Y}\right)^2}+1\right)$$
If $\mu_X=\mu_Y=0$ and $\sigma_X=\sigma_Y=1$, then
$$E(X|Y<y)=-\frac{\rho \phi (y)}{\Phi (y)}$$
$$V(X|Y<y)=1-\frac{\rho ^2 \phi (y) (y \Phi (y)+\phi (y))}{\Phi (y)^2}$$
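These closed forms can be sanity-checked by simulation; the values $\rho=1/2$ and $y=7/10$ below are arbitrary assumptions. Writing $\lambda=\phi(y)/\Phi(y)$, the two formulas above reduce to $-\rho\lambda$ and $1-\rho^2\lambda(y+\lambda)$.
(* Monte Carlo check of the standardized formulas; ρ = 1/2 and y = 7/10 are assumed *)
Module[{ρ = 1/2, y = 7/10, pts, sel, λ},
 pts = RandomVariate[BinormalDistribution[ρ], 10^6]; (* standard bivariate normal *)
 sel = Select[pts, Last[#] < y &][[All, 1]];         (* X-values with Y < y *)
 λ = PDF[NormalDistribution[], y]/CDF[NormalDistribution[], y];
 {{Mean[sel], -ρ λ},                          (* simulated vs closed-form mean *)
   {Variance[sel], 1 - ρ^2 λ (y + λ)}} // N   (* simulated vs closed-form variance *)
]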
Here is the code:
(* Define bivariate normal distribution *)
d = BinormalDistribution[{μX, μY}, {σX, σY}, ρ];
(* Get constant of integration for truncated distribution *)
c = 1/Integrate[PDF[d, {x, y}], {y, -∞, y0}, {x, -∞, ∞},
Assumptions -> y0 ∈ Reals && σX > 0 && σY > 0 && μX ∈ Reals && μY ∈ Reals && -1 < ρ < 1];
(* Mean *)
ex = c Integrate[x PDF[d, {x, y}], {y, -∞, y0}, {x, -∞, ∞},
Assumptions -> y0 ∈ Reals && σX > 0 && σY > 0 && μX ∈ Reals && μY ∈ Reals && -1 < ρ < 1];
(* Expectation of X^2 *)
ex2 = c Integrate[x^2 PDF[d, {x, y}], {y, -∞, y0}, {x, -∞, ∞},
Assumptions -> y0 ∈ Reals && σX > 0 && σY > 0 && μX ∈ Reals && μY ∈ Reals && -1 < ρ < 1];
(* V(X|Y<y) *)
var = ex2 - ex^2
(* Now attempt to simplify and write in terms of usual notation *)
var = var // FullSimplify
var = var /. Erfc[z_] -> 1 - Erf[z] //. Erf[Abs[z_]/(Sqrt[2] σY)] Sign[z_]^3 ->
Erf[z/(Sqrt[2] σY)]
var = var /. E^(-((y0 - μY)^2/(2 σY^2))) -> Sqrt[2 π] ϕ[(y0 - μY)/σY] /.
Erf[z_] -> -1 + 2 Φ[Sqrt[2] z] /. y0 -> y // FullSimplify
(* E(X|Y<y) *)
expectation = ex /. E^(-((y0 - μY)^2/(2 σY^2))) -> Sqrt[2 π] ϕ[(y0 - μY)/σY] /.
Erf[z_] -> -1 + 2 Φ[Sqrt[2] z] /. y0 -> y // FullSimplify
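As a spot check of the simplified output (the numeric values below are assumptions, and ϕ and Φ are the placeholder heads introduced by the substitutions above), the symbolic results can be compared against direct numeric conditional moments:
(* Spot check with assumed values μX = μY = 0, σX = σY = 1, ρ = 1/2, y = 7/10 *)
rules = {μX -> 0, μY -> 0, σX -> 1, σY -> 1, ρ -> 1/2, y -> 7/10,
   ϕ -> (PDF[NormalDistribution[], #] &), Φ -> (CDF[NormalDistribution[], #] &)};
{expectation, var} /. rules // N
(* direct numeric conditional moments for comparison *)
m1 = NExpectation[Conditioned[u, v < 7/10],
   Distributed[{u, v}, BinormalDistribution[1/2]]];
m2 = NExpectation[Conditioned[u^2, v < 7/10],
   Distributed[{u, v}, BinormalDistribution[1/2]]];
{m1, m2 - m1^2}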
Best Answer
As shown in this article (pp. 5-6), instead of focusing immediately on $\text{N}(0,1)$, the desired result can be obtained as a corollary to the following theorem:
Theorem: If $Y\sim\text{N}(\eta,1)$ and $c>0$, then $$\mathbb{V}(Y\mid -c<Y<c)<1.$$
Corollary: If $X\sim\text{N}(\mu,1)$ and $a<b$, then $$\mathbb{V}(X\mid a<X<b)<1.$$
Proof of corollary:
$$\begin{align}\mathbb{V}(X\mid a<X<b)&=\mathbb{V}\left(X-{a+b\over 2}\ \ {\LARGE\mid}\ \ a<X<b\right)\\ &=\mathbb{V}\left(X-{a+b\over 2}\ \ {\LARGE\mid}\ \ a-{a+b\over 2}<X-{a+b\over 2}<b-{a+b\over 2}\right)\\ &=\mathbb{V}\left(X-{a+b\over 2}\ \ {\LARGE\mid}\ \ -{b-a\over 2}<X-{a+b\over 2}<{b-a\over 2}\right)\\ &< 1 \end{align}$$ where the last line follows from the theorem, with $Y=X-{a+b\over 2},\ \ \eta=\mu-{a+b\over 2}$ and $c={b-a\over 2}.$
Proof of theorem: (See p. 6 of the linked article.) Sketch: Because the normal distribution is a member of the exponential family of distributions, it is straightforward to show that $$\begin{align}\mathbb{V}(Y\mid -c<Y<c)&={d\over d\eta}\mathbb{E}(Y\mid -c<Y<c)\\ &=1-{d\over d\eta}h(\eta)\\ \end{align}$$ where $$h(\eta)={\phi(\eta-c)-\phi(\eta+c)\over \Phi(\eta+c)-\Phi(\eta-c)}. $$ The result follows upon observing that $h$ is a continuous odd function, strictly increasing in $\eta$, so that ${d\over d\eta}h(\eta)>0.$
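A quick numeric check of the theorem (the choice $c=1$ is an arbitrary assumption) confirms that the truncated variance stays below 1 across a range of $\eta$:
(* Numeric check of the theorem; c = 1 is assumed *)
Table[{η, Variance[TruncatedDistribution[{-1, 1}, NormalDistribution[η, 1]]] // N},
 {η, 0, 3, 1/2}]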
Aside: The above-linked article conjectures that $\mathbb{V}(X\mid\alpha<X<\beta)<\mathbb{V}(X)$ holds for any distribution if $0<\mathbb{P}(\alpha<X<\beta)<1$. I show, however, that the conjecture can fail spectacularly for Lognormal distributions.
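A hedged illustration of that failure, with the truncation interval $(50,150)$ as one assumed choice: for $\text{LogNormal}(0,1)$, conditioning on an interval far in the right tail produces a conditional variance well above the unconditional one.
(* Lognormal counterexample sketch; the interval (50, 150) is an assumed choice *)
{Variance[LogNormalDistribution[0, 1]],
  Variance[TruncatedDistribution[{50, 150}, LogNormalDistribution[0, 1]]]} // N
(* the truncated variance comes out much larger than the untruncated one *)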