You have $X_1, X_2, \dots, X_n$ iid from an unknown distribution with mean $\mu$ and variance $\sigma^2$.
$\bar{X}$ is an unbiased estimator of the mean, so $E(\bar{X}) = \mu$. Also, $Var(\bar{X}) = \sigma^2/n$. Thus,
\begin{align*}
E[\bar{X}^2] & = Var(\bar{X}) + E[\bar{X}]^2\\
& = \dfrac{\sigma^2}{n} + \mu^2.
\end{align*}
You can now figure out what the bias is. Clearly, $\bar{X}^2$ is a horrible estimator for $\sigma^2$. As wolfies pointed out, you will do better with $n\bar{X}^2$.
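If it helps to see this numerically, here is a minimal Monte Carlo sketch (assuming `numpy`; the values of $\mu$, $\sigma$, and $n$ are arbitrary choices, not taken from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 2.0, 3.0, 10, 200_000

# reps independent samples of size n; one sample mean per row
xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

print(np.mean(xbar**2))      # Monte Carlo estimate of E[xbar^2]
print(sigma**2 / n + mu**2)  # theory: sigma^2/n + mu^2 = 4.9
```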
Given that $n_1$ is a random variable (as already pointed out in the comments), the expected value can be computed as $E(\hat\mu)=E_{n_1}[E_{\hat \mu}(\hat\mu|n_1)]$. For the inner expectation, note that one can't just write
$$E_{\hat \mu}(\hat\mu|n_1)=\frac{1}{n_1}\sum_{X_i>1}E(X_i),$$
because the index set $\{i: X_i>1\}$ is itself random: an expected value cannot depend on the realized values of the $X_i$, as this sum would require. So
$$E_{\hat \mu}(\hat\mu|n_1)=\frac{1}{n_1}E\left[\sum_{X_i>1} X_i\,\Big|\,n_1\right].$$
For given $n_1$ we can write, with appropriate renumbering of the indexes, $\sum_{X_i>1} X_i=\sum_{j=1}^{n_1} X_j^*$, where the $X_j^*$ are random variables following a normal distribution truncated to $(a,b)$ with $a=1$ and $b=\infty$. Let $E_{\mu,\sigma^2,a,b}X$ denote the expectation of such a truncated normal. For $a=1$, $b=\infty$,
$$E_{\mu,\sigma^2,1,\infty}X=\mu+\frac{\varphi\left(\frac{1-\mu}{\sigma}\right)}{1-\Phi\left(\frac{1-\mu}{\sigma}\right)}\sigma=t>\mu;$$
see https://en.wikipedia.org/wiki/Truncated_normal_distribution. Conditioning on $n_1$, we have
$$E_{\hat \mu}(\hat\mu|n_1)=\frac{1}{n_1}\sum_{j=1}^{n_1} E(X_j^*)=\frac{1}{n_1}\,n_1\,E_{\mu,\sigma^2,1,\infty}(X)=t>\mu.$$
This does not depend on $n_1$ (unless $n_1=0$, in which case the sum is empty and $E_{\hat \mu}(\hat\mu|n_1=0)=0$), so ultimately
$$E(\hat \mu)=P\{n_1>0\}\,t.$$
This is $>\mu$ (bias!) whenever $\mu\le 0$, since $t>1>0$, and also whenever $P\{n_1=0\}$ is small enough that $P\{n_1>0\}\,t>\mu$; the latter should hold unless $n$ is very small (a very small $n$ can make $P\{n_1=0\}$ large; its value is given in Xi'an's solution).
PS: I corrected this after seeing Xi'an's solution, which gets right a point I had overlooked. That solution looks perfectly correct as far as I can see, but my different way of getting there may also help.
PPS: I take $\hat \mu=0$ in the case $n_1=0$, which isn't entirely clear in the question.
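For what it's worth, the result can also be checked by simulation. A minimal sketch (assuming `numpy` and `scipy`; the parameter values are arbitrary, and it uses the convention $\hat\mu=0$ when $n_1=0$ from the PPS):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma, n, reps = 0.5, 1.0, 20, 200_000

X = rng.normal(mu, sigma, size=(reps, n))
mask = X > 1.0
n1 = mask.sum(axis=1)

# hat_mu = mean of the X_i exceeding 1, with hat_mu = 0 when n1 = 0
hat_mu = np.where(n1 > 0, (X * mask).sum(axis=1) / np.maximum(n1, 1), 0.0)

alpha = (1.0 - mu) / sigma
t = mu + sigma * norm.pdf(alpha) / (1.0 - norm.cdf(alpha))  # truncated-normal mean
p_pos = 1.0 - norm.cdf(alpha) ** n                          # P{n1 > 0}

print(hat_mu.mean())  # Monte Carlo estimate of E[hat_mu]
print(p_pos * t)      # theory: P{n1 > 0} * t
print(mu)             # both exceed mu, showing the bias
```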
Best Answer
Suppose $X_1, \dots, X_N \sim (\mu, \sigma^2)$. Then observe that $$\mu^2 = \mu_2 - \sigma^2$$ where I am using $\mu_2$ to represent the second moment $\mathbb{E}[X^2]$.
Then, a well-known unbiased estimator of $\sigma^2$ is $$S^2 = \dfrac{1}{N-1}\sum_{i=1}^{N}(X_i-\bar{X})^2$$ where $\bar{X} = \dfrac{\sum_{i=1}^{N}X_i}{N}$.
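As a quick sanity check, a short simulation (assuming `numpy`; the parameter values are arbitrary) shows that the $\frac{1}{N-1}$ normalization is exactly what makes $S^2$ unbiased:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, N, reps = 1.0, 2.0, 5, 200_000

X = rng.normal(mu, sigma, size=(reps, N))

print(X.var(axis=1, ddof=1).mean())  # S^2 with 1/(N-1): approx sigma^2 = 4.0
print(X.var(axis=1, ddof=0).mean())  # 1/N version: approx (N-1)/N * sigma^2 = 3.2
```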
Furthermore, in general, if we have a function $g$ such that $\mathbb{E}[g(X_i)] = k$ for each $i$ (i.e., the expected value is the same for each variable in the random sample), we have $$\mathbb{E}\left[\dfrac{1}{N}\sum_{i=1}^{N}g(X_i)\right] = \dfrac{1}{N}(Nk)=k\text{,}$$ which means that $\dfrac{1}{N}\sum_{i=1}^{N}g(X_i)$ is an unbiased estimator of $k$. Use this to find an unbiased estimator of $\mu_2$.
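If it helps, here is the hint spelled out in simulation form (assuming `numpy`; the parameter values are arbitrary): taking $g(x)=x^2$ gives $k=\mu_2$, so $\hat\mu_2=\frac{1}{N}\sum_{i=1}^{N}X_i^2$ is unbiased for $\mu_2$, and by the identity above $\hat\mu_2 - S^2$ is then unbiased for $\mu^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, N, reps = 1.5, 2.0, 5, 200_000

X = rng.normal(mu, sigma, size=(reps, N))

mu2_hat = (X**2).mean(axis=1)  # (1/N) sum X_i^2, unbiased for mu_2
S2 = X.var(axis=1, ddof=1)     # unbiased for sigma^2

print(mu2_hat.mean())              # approx mu_2 = mu^2 + sigma^2 = 6.25
print((mu2_hat - S2).mean())       # approx mu^2 = 2.25
print((X.mean(axis=1)**2).mean())  # xbar^2 is biased: approx mu^2 + sigma^2/N = 3.05
```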