The original random variable $X_{t+1}$ is normally distributed. Call its probability density $P_{X_{t+1}}$.
Define a function
$$g(v) = v \cdot 1_{\{v > z_t\}}$$
where $1_{\{\cdot\}}$ is the indicator function. This can also be written as:
$$g(v) = \left \{
\begin{array}{cl}
v & v > z_t \\
0 & \text{otherwise}
\end{array}
\right .$$
Another way to phrase your question is: what is the expected value of $g(X_{t+1})$? We can write this as:
$$E[g(X_{t+1})] = \int_{-\infty}^{\infty} g(v) P_{X_{t+1}}(v) dv$$
We know that $g(v) = 0$ for $v \le z_t$, and $g(v) = v$ for $v > z_t$. So, we can split the integral across two intervals:
$$E[g(X_{t+1})] = \int_{-\infty}^{z_t} 0 \cdot P_{X_{t+1}}(v) dv
+ \int_{z_t}^{\infty} v \cdot P_{X_{t+1}}(v) dv$$
The first term is clearly zero, so we're left with:
$$E[g(X_{t+1})] = \int_{z_t}^{\infty} v \cdot P_{X_{t+1}}(v) dv$$
$X_{t+1}$ is normally distributed, so we can substitute the $N(\mu, \sigma^2)$ density for $P_{X_{t+1}}$:
$$E[g(X_{t+1})] = \int_{z_t}^{\infty} \frac{v}{\sigma \sqrt{2 \pi}} \exp \left [ {-\frac{(v-\mu)^2}{2 \sigma^2}} \right ] dv$$
Evaluating the integral gives the final answer:
$$
E[g(X_{t+1})] =
\frac{\mu}{2} \left [
1 - \text{erf} \left (
\frac{z_t - \mu}{\sigma \sqrt{2}}
\right )
\right ]
+ {
\frac{\sigma}{\sqrt{2 \pi}}
\exp \left [
-\frac{(z_t - \mu)^2}{2 \sigma^2}
\right ]
}
$$
where $\text{erf}(\cdot)$ is the error function.
You can check that this is correct by simulation: draw many samples from $N(\mu, \sigma^2)$, set values at or below $z_t$ to zero, then take the sample mean.
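The simulation check above can be sketched in a few lines of Python (NumPy assumed; `mu`, `sigma`, and `z` are arbitrary example values, with `z` standing in for $z_t$):

```python
import math
import numpy as np

# Hypothetical example parameters; z plays the role of z_t
mu, sigma, z = 1.0, 2.0, 0.5
rng = np.random.default_rng(0)

# Closed-form answer derived above
analytic = (mu / 2) * (1 - math.erf((z - mu) / (sigma * math.sqrt(2)))) \
    + (sigma / math.sqrt(2 * math.pi)) * math.exp(-(z - mu) ** 2 / (2 * sigma ** 2))

# Monte Carlo check: keep samples above z, zero out the rest, average
samples = rng.normal(mu, sigma, size=1_000_000)
mc = np.where(samples > z, samples, 0.0).mean()

print(analytic, mc)  # the two values should agree to a couple of decimals
```

With a million samples the Monte Carlo estimate typically matches the closed form to two or three decimal places.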
Edit (as suggested by user12):
In the case where $X_{t+1}$ has mean zero, plug $\mu = 0$ into the last equation above, to obtain:
$$
E[g(X_{t+1})] =
\frac{\sigma}{\sqrt{2 \pi}}
\exp \left [
-\frac{z_t^2}{2 \sigma^2}
\right ]
$$
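The same simulation check works for the mean-zero case (a sketch with arbitrary example values for $\sigma$ and $z_t$):

```python
import math
import numpy as np

sigma, z = 1.5, 0.7  # arbitrary example values; z stands in for z_t
rng = np.random.default_rng(1)

# Simplified closed form for mu = 0
simplified = (sigma / math.sqrt(2 * math.pi)) * math.exp(-z ** 2 / (2 * sigma ** 2))

# Monte Carlo estimate with a zero-mean normal
samples = rng.normal(0.0, sigma, size=1_000_000)
mc = np.where(samples > z, samples, 0.0).mean()

print(simplified, mc)  # the two values should be close
```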
Best Answer
There are $N!$ equally likely arrangements, and for each person $i$, in exactly $(N-1)!$ of them person $i$ takes their own hat. This yields $P(X_i = 1) = (N-1)!/N! = 1/N$.
Alternatively, going step by step (which is harder), you could use the law of total probability and write
$$\begin{align}P(X_2=1)&=P(X_2=1|X_1=1)P(X_1=1)+P(X_2=1|X_1=0)P(X_1=0)\\&=\frac{1}{N-1}\frac{1}{N}+\frac{1}{N-1}\frac{N-2}{N}\\&=\frac{1}{N}\end{align}$$
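You can confirm $P(X_i = 1) = 1/N$ by simulating random hat assignments; here is a minimal sketch (NumPy assumed; $N$ and the trial count are arbitrary choices):

```python
import numpy as np

N, trials = 5, 200_000  # arbitrary example sizes
rng = np.random.default_rng(2)

# Each row is one random assignment of hats to people
perms = rng.permuted(np.tile(np.arange(N), (trials, 1)), axis=1)

# Person i gets their own hat exactly when position i holds value i
p_own = (perms == np.arange(N)).mean(axis=0)
print(p_own)  # every entry should be close to 1/N = 0.2
```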
For the second summand, the first person must take neither their own hat nor Person 2's hat (otherwise Person 2 cannot take their own), which leaves $N-2$ admissible hats out of $N$ for the first person; Person 2 then picks their own hat out of the remaining $N-1$.
Since the sample space is finite, everything reduces to counting: you can enumerate every possible arrangement and form a joint probability table, which defines the joint distribution of these variables.
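For example, with a small $N$ you can enumerate all $N!$ permutations and tabulate the joint distribution of $(X_1, X_2)$ directly (a sketch; $N = 4$ is an arbitrary choice):

```python
from itertools import permutations
from collections import Counter

N = 4  # small example so full enumeration is cheap
counts = Counter()
perms = list(permutations(range(N)))

for p in perms:
    x1 = int(p[0] == 0)  # does person 1 get their own hat?
    x2 = int(p[1] == 1)  # does person 2 get their own hat?
    counts[(x1, x2)] += 1

# Joint probability table over (X1, X2)
joint = {k: v / len(perms) for k, v in counts.items()}

# Marginal P(X_2 = 1) recovered from the table equals 1/N
p_x2 = joint.get((0, 1), 0.0) + joint.get((1, 1), 0.0)
print(joint, p_x2)
```

The recovered marginal agrees with the $1/N$ answer above, and the table entry for $(1, 1)$ equals $(N-2)!/N!$.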