$$\hat{\rho} \approx \frac{2\overline{x}}{\left(s_x^*\right)^2} +1$$
$$\Rightarrow \frac 12 (\hat{\rho} -1) \approx \frac{\overline{x}}{\left(s_x^*\right)^2} = [\left(s_x^*\right)^2]^{-1}\overline{x} = \Big(\frac{1}{T}\sum_{t=1}^T (x_t - \overline{x})^2\Big)^{-1}\cdot \Big(\frac{1}{T} \sum_{t=1}^T x_t\Big)$$
If we can assume that the process $\{X_t\}$ is weakly stationary, with mean $\mu_x$ and variance $\sigma^2_x$, then by applying the relevant CLT (together with Slutsky's theorem, since $\left(s_x^*\right)^2 \to_p \sigma^2_x$) we quickly arrive at
$$\hat{\rho} \sim_{approx} N\left(1+ \frac {2\mu_x}{\sigma^2_x}, \frac 4{\sigma^2_xT}\right) $$
for "large samples".
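As a quick sanity check of this approximation, one can simulate i.i.d. data (which is trivially weakly stationary) and compare the moments of $\hat\rho = 1 + 2\bar x/\left(s_x^*\right)^2$ with the stated normal approximation. A minimal Python sketch; the parameter values and sample sizes below are illustrative choices, not from the original:

```python
import numpy as np

rng = np.random.default_rng(0)
T, reps = 2000, 20000            # illustrative sample size and replication count
mu_x, sigma_x = 0.2, 1.0

rho_hat = np.empty(reps)
for r in range(reps):
    x = rng.normal(mu_x, sigma_x, size=T)   # i.i.d. draws: trivially weakly stationary
    s2 = x.var()                            # divide-by-T variance, matching (s_x^*)^2
    rho_hat[r] = 1.0 + 2.0 * x.mean() / s2

print(rho_hat.mean(), 1 + 2 * mu_x / sigma_x**2)   # simulated vs approximate mean
print(rho_hat.var(), 4 / (sigma_x**2 * T))         # simulated vs approximate variance
```

Note that the approximation treats $\left(s_x^*\right)^2$ as fixed at $\sigma^2_x$, so when $\mu_x$ is large relative to $\sigma_x$ the sampling variability of $\left(s_x^*\right)^2$ inflates the simulated variance above $4/(\sigma^2_x T)$; the small $\mu_x$ chosen here keeps that effect minor.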
ADDENDUM
To clarify some questions that emerged in the comments:
Assuming the conditions for the CLT hold, define the variable $W=\bar x -\mu_x$ and obtain
$$\sqrt TW \rightarrow_d N(0, \sigma^2_x)$$
and then as an approximation, we have
$$W \sim_{approx} N(0, \sigma^2_x/T)$$
for large samples. Now substitute in the equation for $\hat \rho$ the expression $\bar x = W+ \mu_x$ to obtain
$$\hat{\rho} \approx \frac{2}{\left(s_x^*\right)^2}(W+ \mu_x) +1 = \frac{2}{\left(s_x^*\right)^2}W+ \Big(\frac{2}{\left(s_x^*\right)^2}\mu_x +1\Big)$$
which, given the approximate distributional result for $W$, gives you the approximate distribution of $\hat \rho$ for large samples, using of course Slutsky's theorem as pointed out by @mpiktas. And yes, these results are full of approximations.
(I am going to answer this home-study question fully, because it appears that the OP has strayed rather far from any path that could be steered in the right direction by simple hints.)
It can be shown that the MLE for $\theta$ is the minimum order statistic,
$$\hat \theta_n = X_{1:n}$$
In fact, one can show that $\hat \theta_n$ itself follows a Pareto distribution with scale parameter $\theta$ and (since the original shape parameter here is $1$) shape parameter $n$, i.e. with distribution function
$$F(\hat \theta_n) = 1- \left(\frac {\hat \theta_n}{\theta}\right)^{-n}$$
So in this case we have available the finite sample distribution of the ML estimator.
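This finite-sample claim is easy to corroborate by simulation: draw many samples from a Pareto with shape $1$, take the minimum, and check that the probability integral transform under the claimed Pareto$(\theta, n)$ CDF is Uniform$(0,1)$. A sketch in Python (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 10, 50000   # illustrative values

# Pareto(scale theta, shape 1) by inverse CDF: X = theta / U with U ~ Uniform(0,1)
u = rng.uniform(size=(reps, n))
samples = theta / u
theta_hat = samples.min(axis=1)   # the MLE: minimum order statistic

# If theta_hat ~ Pareto(theta, n), then F(theta_hat) = 1 - (theta/theta_hat)^n
# should be Uniform(0,1), with mean near 1/2 and variance near 1/12.
pit = 1.0 - (theta / theta_hat) ** n
print(pit.mean(), pit.var())
```

Inverse-CDF sampling is used here because the shape-$1$ Pareto has no finite mean, so moment-based checks on the raw draws would be useless.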
Still, the OP is asked to consider asymptotics for the function $\sqrt {n} (\hat \theta _n - \theta)$.
Subtracting $\theta$ does not make the distribution centered at zero or anything like that, since the support of each $X_i$ is bounded below by $\theta$, so we always have
$$\hat \theta_n = X_{1:n} \geq \theta$$
By this observation alone we know that even if $\sqrt {n} (\hat \theta _n - \theta)$ has a limiting distribution, it won't be the normal.
Now, $(\hat \theta_n - \theta)$ has an exact Lomax distribution with the same parameters as the distribution of $\hat \theta_n$. The variance of a Lomax distribution equals the variance of the corresponding Pareto distribution, so in our case
$$\text{Var}[(\hat \theta_n - \theta)] = \frac {\theta^2 n}{(n-1)^2(n-2)}$$
Note that the denominator has leading term $n^3$, so the variance itself is $O(n^{-2})$. To stabilize this variance as $n$ goes to infinity, so that it neither explodes nor goes to zero, one needs to multiply the variable by $n$, not $\sqrt {n}$:
$$\text{Var}[n(\hat \theta_n - \theta)] = \frac {\theta^2 n^3}{(n-1)^2(n-2)} \to \theta^2$$
So $\sqrt {n} (\hat \theta _n - \theta)$ converges to the constant zero: the estimator converges faster than the usual $\sqrt n$ rate, and multiplying by $\sqrt {n}$ "is not enough" to maintain a nondegenerate distribution; we need to multiply by $n$. Standardized by its standard deviation (which goes to zero), it explodes.
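The scaling argument shows up clearly in simulation: as $n$ grows, the sample variance of $\sqrt n(\hat\theta_n - \theta)$ shrinks toward zero, while that of $n(\hat\theta_n - \theta)$ settles near $\theta^2$. A sketch in Python (values illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, reps = 2.0, 100000   # illustrative values

for n in (5, 20, 100):
    # minimum of n draws from Pareto(scale theta, shape 1), via inverse CDF
    theta_hat = (theta / rng.uniform(size=(reps, n))).min(axis=1)
    d = theta_hat - theta
    exact = theta**2 * n**3 / ((n - 1) ** 2 * (n - 2))  # exact Var[n d]
    # Var[sqrt(n) d] shrinks toward 0; Var[n d] approaches theta^2 = 4
    print(n, np.var(np.sqrt(n) * d), np.var(n * d), exact)
```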
Can we obtain the limiting distribution of $Z_n = n(\hat \theta_n - \theta)$?
We have for the distribution function
$$F_{Z_n}(z) = \text{Prob}(Z_n \leq z) = \text{Prob}(n(\hat \theta_n - \theta) \leq z) = \text{Prob}\left(\hat \theta_n \leq \theta + \frac {z}{n} \right)$$
$$ \implies F_{Z_n}(z) = 1- \left(\frac {\theta + \frac {z}{n}}{\theta}\right)^{-n} = 1- \left(1+ \frac {(z/\theta)}{n}\right)^{-n}$$
$$\implies \lim_{n \to \infty}F_{Z_n}(z) = 1 - \exp\{-z/\theta\}$$
which is the CDF of the Exponential distribution, with expected value $\theta$ and variance $\theta^2$.
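This limit, too, can be corroborated numerically: for large $n$ the empirical distribution of $Z_n = n(\hat\theta_n - \theta)$ should sit close to an Exponential with mean $\theta$. A sketch comparing the empirical CDF to the limiting CDF (values illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 2.0, 200, 50000   # illustrative values

# minimum of n draws from Pareto(scale theta, shape 1), via inverse CDF
theta_hat = (theta / rng.uniform(size=(reps, n))).min(axis=1)
z = n * (theta_hat - theta)

# sup-distance between the empirical CDF of Z_n and the Exponential(mean theta) CDF
grid = np.linspace(0.0, 10.0, 101)
ecdf = (z[:, None] <= grid).mean(axis=0)
ks = np.abs(ecdf - (1.0 - np.exp(-grid / theta))).max()
print(z.mean(), ks)   # mean near theta, sup-distance near 0
```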
Best Answer
Part of the sufficient conditions for asymptotic normality of the MLE is that all models in the family have the same support. This fails in your example, because the support $(\theta,\infty)$ depends on $\theta$.
In particular this means that $\hat \theta_n = X_{(1)} > \theta$, and so $Y_n = \sqrt{n}(\hat \theta_n - \theta) > 0$. Thus it cannot be asymptotically normal (other than being degenerate, in case you consider that "normal").
We can compute the distribution more explicitly. As you correctly noted, we have $$ \mathbb{P}(Y_n < y) = \mathbb{P}(X_{(1)} < \theta + y/\sqrt{n}). $$ (Note that you can't just plug $\theta + y/\sqrt{n}$ straight into the density of $X_{(1)}$; you need to differentiate the expression above, and you'd find an extra factor of $1/\sqrt{n}$.)
This comes out as $$ \mathbb{P}(Y_n < y) = 1 - \exp(-y \sqrt{n}), $$ i.e. Exponential with rate $\sqrt{n}$. This is not asymptotically normal, and its variance is $1/n$. Assuming efficiency is defined by the ratio to the Cramér-Rao lower bound, we must compute the Fisher information $I(\theta)$ and examine the ratio $$ n/I(\theta). $$
The Fisher information is $$ I(\theta) = \mathbb{E}\left[l'(X;\theta)^2\right] = \mathbb{E}\left[\left(\frac{d}{d\theta}\Big(n\theta - \sum_i X_i\Big)\right)^2\right] = n^2. $$
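The log-likelihood $n\theta - \sum_i X_i$ suggests the model here is the shifted exponential with density $e^{-(x-\theta)}$ on $(\theta,\infty)$ (an inference from the formulas above, not stated explicitly). Under that reading, the exact Exponential(rate $\sqrt n$) law of $Y_n$ is easy to confirm by simulation (values illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 1.0, 100, 100000   # illustrative values

# shifted exponential: density exp(-(x - theta)) on (theta, infinity)
x = theta + rng.exponential(size=(reps, n))
y = np.sqrt(n) * (x.min(axis=1) - theta)

# Y_n ~ Exponential(rate sqrt(n)): mean 1/sqrt(n), variance 1/n
print(y.mean(), 1 / np.sqrt(n))
print(y.var(), 1 / n)
```

The variance $1/n$ here is exact for every $n$, not merely asymptotic, which is why the comparison with the Cramér-Rao ratio $n/I(\theta)$ above is clean.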