A "sample" of $1$ is highly unusual, and therefore it is reasonable to be confused. The random variable is known to have variance $1$. The mean is not known, else we would not be doing the test of hypothesis.
As you wrote, we reject if the absolute value of the sample mean is $\ge 3$. The sample mean in this case is just the value taken on by the random variable. (The average height of a group consisting of one person is the height of that person.)
Under the hypothesis $H_0$, the absolute value is $\ge 3$ with probability $\Pr(|Z|\ge 3)$, where $Z$ is standard normal. So you will falsely reject $H_0$ with probability $\Pr(|Z|\ge 3)$.
You can calculate this number from a table of the standard normal distribution; I get about $0.0027$.
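For a quick numerical cross-check (a small Python sketch using SciPy, not part of the original table lookup), the two-sided tail probability can be computed directly:

```python
from scipy.stats import norm

# P(|Z| >= 3) for standard normal Z: twice the upper-tail probability.
# norm.sf(3) is the survival function 1 - Phi(3).
alpha = 2 * norm.sf(3)
print(alpha)  # ~0.0027
```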
I start here with a self-answer treating case (a); maybe others will join in. Case (b) is more challenging.
1. General
The PDF is given by the formula
$$h(x) = \int_{a}^b \delta (x-\{\frac{1}{r}\}) f(r) \,dr\tag{1}$$
Here $\delta(z)$ is the Dirac delta function and $a$ and $b$ are the limits of the integration region.
The procedure for dealing with the fractional part was described in detail, for instance, in this example (Closed form of integral over fractional part $\int_0^1 \left\{\frac{1}{2}\left(x+\frac{1}{x}\right)\right\}\,dx$).
Substituting $1/r\to y$, the integral becomes (with $A = \frac{1}{a}$, $B=\frac{1}{b}$)
$$h(x) = \int_{B}^{A}\,dy\; \delta (x-\{y\}) f(\frac{1}{y})\frac{1}{y^2} \tag{2}$$
2. Case (a)
Here $f=1, A=\infty, B=1$, hence
$$h_{a}(x) = \int_{1}^{\infty} \,dy \;\delta (x-\{y\}) \frac{1}{y^2} =\sum_{k=1}^\infty a_k\tag{a1}$$
where, setting $y=k+\xi$ whence $\{y\}=\xi$, the summands become
$$a_k = \int_{0}^1 \delta (x-\xi) \frac{1}{(k+\xi)^2} \,d\xi = \frac{\theta (1-x) \theta (x)}{(k+x)^2}\tag{a2}$$
The Heaviside functions make $a_k=0$ for $x<0$ or $x>1$. We keep this in mind and drop the $\theta$ factors henceforth.
Performing the sum gives
$$h_{a}(x) = \sum_{k=1}^\infty \frac{1}{(k+x)^2} = \psi ^{(1)}(x+1)\tag{a3}$$
Hence the PDF of $x=\{1/r\}$ for a flat distribution of $r$ is given by a polygamma function. Conversely, we have obtained an interesting probabilistic interpretation of $\psi ^{(1)}$.
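As a sanity check of (a3), here is a small Monte Carlo sketch (my addition, assuming NumPy and SciPy are available): draw $r$ uniformly on $(0,1]$, histogram $x=\{1/r\}$, and compare with $\psi^{(1)}(x+1)$.

```python
import numpy as np
from scipy.special import polygamma

rng = np.random.default_rng(0)
r = 1.0 - rng.random(10**6)        # uniform draws in (0, 1], avoids r = 0
frac = np.mod(1.0 / r, 1.0)        # x = {1/r}

# Empirical density of {1/r} versus the trigamma prediction psi^(1)(x+1).
edges = np.linspace(0.0, 1.0, 21)
hist, _ = np.histogram(frac, bins=edges, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - polygamma(1, mid + 1.0))))  # should be small, ~1e-2
```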
The first few moments, defined as
$$\mu(q) = \int_0^1 x^q \psi ^{(1)}(x+1) \, dx$$
are given in the format $\{q,\mu(q)\}$ by
$$\left(
\begin{array}{cc}
0 & 1 \\
1 & 1-\gamma \\
2 & \log (2 \pi)-\gamma -1 \\
3 & \frac{1}{2} (\log (8)-12 \log (A)+3 \log (\pi )-2 \gamma -1) \\
4 & \frac{1}{3} \left(\log (64)-36 \log (A)+6 \log (\pi )+\frac{9 \zeta (3)}{\pi ^2}-3 \gamma -1\right) \\
\end{array}
\right)$$
Here $A$ is Glaisher's constant, which satisfies the relation $\log(A) = \frac{1}{12} - \zeta'(-1)$.
Notice that the normalization is okay ($q=0$ gives $1$), and the mean ($q=1$) reproduces the well-known result $\int_{0}^1 \{\frac{1}{x}\} \,dx = 1-\gamma$.
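The tabulated closed forms can also be checked numerically. The sketch below uses mpmath (my choice here, since it provides $\gamma$, $\zeta$, and Glaisher's constant out of the box) to compare each $\mu(q)$ against direct quadrature.

```python
from mpmath import mp, mpf, quad, polygamma, log, pi, euler, glaisher, zeta

mp.dps = 30  # working precision in decimal digits

def mu(q):
    # q-th moment of the density psi^(1)(x+1) on (0,1)
    return quad(lambda x: x**q * polygamma(1, x + 1), [0, 1])

# Closed forms from the table above (A = Glaisher's constant).
closed = [
    mpf(1),
    1 - euler,
    log(2*pi) - euler - 1,
    (log(8) - 12*log(glaisher) + 3*log(pi) - 2*euler - 1) / 2,
    (log(64) - 36*log(glaisher) + 6*log(pi) + 9*zeta(3)/pi**2 - 3*euler - 1) / 3,
]

for q, c in enumerate(closed):
    print(q, mu(q) - c)  # each difference should vanish to working precision
```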
The terminology is that $Y$ is a mixture distribution. More precisely, it is a finite mixture of the random variables $N_1,N_2,N_3$ with weights $0.3,0.5,0.2$. The probability density of $Y$ is then simply the mixture density $$f_Y(y)=0.3f_{N_1}(y)+0.5f_{N_2}(y)+0.2f_{N_3}(y).$$ That immediately implies your conjecture for $E(Y)$, and indeed in general one has $$E[H(Y)]=0.3E[H(N_1)]+0.5E[H(N_2)]+0.2E[H(N_3)]$$ for any function $H(\cdot)$ with finite expected value. (For proof and applications, see the Wikipedia link above.)
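To make the mixture construction concrete, here is a small simulation sketch. The component parameters below are hypothetical (this excerpt does not specify $N_1,N_2,N_3$ numerically), but the two-stage sampling, first picking a component by its weight and then drawing from it, is exactly what defines a finite mixture, and the empirical mean matches the weighted component means.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10**6

# Hypothetical (mean, sd) pairs for N1, N2, N3: assumed for illustration only.
params = [(0.0, 1.0), (5.0, 2.0), (-3.0, 0.5)]
weights = [0.3, 0.5, 0.2]

# Two-stage sampling: pick a component index with the mixture weights,
# then draw from the selected normal component.
idx = rng.choice(len(params), size=n, p=weights)
means = np.array([m for m, s in params])
sds = np.array([s for m, s in params])
y = rng.normal(means[idx], sds[idx])

exact = sum(w * m for w, (m, s) in zip(weights, params))  # 0.3*0 + 0.5*5 + 0.2*(-3) = 1.9
print(y.mean(), exact)  # should agree to Monte Carlo accuracy
```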