This is not a complete solution, but it may help you on your way.
What you have is the Pareto distribution with the scale parameter $x_m=1$.
First of all, there is an error in your computation of the derivative of the log-likelihood function.
$$l(\theta) = n\ln(\theta) - (\theta+1) \sum_{i=1}^n \ln(x_i)$$
Taking the derivative with respect to $\theta$ gives:
$$\frac{\partial l(\theta) }{\partial \theta}=\frac{n}{\theta}-\sum_{i=1}^n \ln(x_i)$$
Second, you do not need to set it to zero (unless you are after the ML estimator). Moreover, to compute the Cramér-Rao bound you need to take the second derivative:
$$\frac{\partial^2 l(\theta) }{\partial \theta^2}=-\frac{n}{\theta^2}$$
Then taking the expectation:
$$\mathrm{E}\left\{ \left(\frac{\partial l(\theta) }{\partial \theta}\right)^2\right\}=-\mathrm{E}\left\{ \frac{\partial^2 l(\theta) }{\partial \theta^2}\right\}=\frac{n}{\theta^2}\int_1^\infty \theta x^{-\theta-1} \mathrm dx=\frac{n}{\theta^2}$$
Here, as Lost1 indicated, we are asked to find the bound for the estimate of the parameter itself, not of a function of it, so: $$\mathrm{Var}(\hat\theta)\geq\frac{\theta^2}{n}$$
By the way, if you do need the ML estimator, you can set the first derivative to zero and get:
$$\hat{\theta}_{ML}:\left.\frac{\partial l(\theta) }{\partial \theta}\right\vert_{\theta=\hat{\theta}_{ML}}=0 \quad \Rightarrow \quad\hat{\theta}_{ML}=\frac{n}{\sum_{i=1}^n \ln(x_i)}$$
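As a sanity check, the ML estimator and the Cramér-Rao bound above can be verified by simulation. A minimal sketch (assuming the scale $x_m = 1$ as in the problem, with $\theta$, $n$, and the replication count chosen arbitrarily for illustration; samples are drawn by inverse-CDF sampling):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 3.0, 200, 2000  # true shape, sample size, replications

# Pareto(x_m = 1, shape = theta) via inverse CDF: X = U**(-1/theta)
u = rng.random((reps, n))
x = u ** (-1.0 / theta)

# ML estimator from the derivation: theta_hat = n / sum(ln x_i)
theta_hat = n / np.log(x).sum(axis=1)

# Empirical variance of the estimator vs. the Cramer-Rao bound theta^2 / n
print(np.var(theta_hat))   # slightly above the bound (the MLE is biased)
print(theta**2 / n)        # = 0.045
```

For large $n$ the empirical variance approaches the bound, consistent with the asymptotic efficiency of the MLE.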
We have the joint pdf
$$
f(\vec x ; \theta) = \theta^n c^{\theta n} \prod_{i=1}^n x_i^{-(\theta+1)}\mathbb{1}_{x_i \ge c}
=\mathbb{1}_{x_{(1)} \ge c} \left[ \theta^n c^{\theta n} \right] \exp \left[
-(\theta+1) \sum_{i=1}^n \ln x_i\right]
$$
and so, by the exponential-family factorization, $\sum_{i=1}^n \ln X_i$ is complete and sufficient for this family.
For a preliminary result, consider $Y = \ln(X) - \ln(c)$, where $X$ follows the given Pareto distribution, i.e. $f_X(x) = \theta c^\theta x^{-(\theta+1)} \mathbb{1}_{x \ge c}$. Then, since $X = ce^Y$, we get
$$
f_Y(y) = f_X(ce^y) \left\vert \frac{dx}{dy} \right\vert
= \theta c^\theta (c e^y)^{-(\theta+1)} \mathbb{1}_{ ce^y \ge c} \cdot ce^y = \theta e^{-y \theta} \mathbb{1}_{ y \ge 0}
$$
which is the pdf of an Exponential distribution with rate $\theta$. Define $Y_i := \ln(X_i) - \ln(c)$.
It follows that $\sum_{i=1}^n Y_i = \sum_{i=1}^n (\ln X_i - \ln (c))$ follows a $\Gamma(n,\theta)$ distribution since it's the sum of $n$ independent exponential rate $\theta$ random variables. Note that the mean of an exponential rate $\theta$ r.v. is $1/\theta$ and the mean of a $\Gamma(n,\theta)$ r.v. is $n/\theta$.
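The claim that $\sum_{i=1}^n Y_i \sim \Gamma(n,\theta)$ can be checked numerically by matching the first two moments. A minimal sketch (the values of $\theta$, $c$, and $n$ below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, c, n, reps = 2.0, 1.5, 10, 50_000

# Pareto(scale c, shape theta) via inverse CDF: X = c * U**(-1/theta)
x = c * rng.random((reps, n)) ** (-1.0 / theta)
s = (np.log(x) - np.log(c)).sum(axis=1)  # sum of the Y_i

# Gamma(n, rate theta) has mean n/theta and variance n/theta^2
print(s.mean(), n / theta)     # both ≈ 5.0
print(s.var(), n / theta**2)   # both ≈ 2.5
```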
So $\frac{1}{n} \sum_{i=1}^n Y_i$ is an unbiased estimator of $1/\theta$, and it's natural to guess that $1/ \left( \frac{1}{n} \sum_{i=1}^n Y_i \right)$ is an unbiased estimator of $\theta$.
Let $Z \sim \Gamma(n,\theta)$. Then, the expectation of $1/ \left( \frac{1}{n} \sum_{i=1}^n Y_i \right)$ equals:
\begin{align*}
E \left[ \frac{n}{Z} \right] &= n \int_0^\infty \frac{1}{z} \frac{1}{\Gamma(n)} \theta^n z^{n-1} e^{- \theta z} \; dz \\
&= n \int_0^\infty \frac{1}{\Gamma(n)} \theta^n z^{n-2} e^{- \theta z} \; dz \\
&= n \frac{ \theta \Gamma(n-1)}{\Gamma(n)} \int_0^\infty \frac{1}{\Gamma(n-1)} \theta^{n-1} z^{n-2} e^{- \theta z} \; dz
\end{align*}
and this equals $\theta n \dfrac{ (n-2)!}{(n-1)!}= \frac{n}{n-1} \theta$, since the rightmost integral is the pdf of a $\Gamma(n-1,\theta)$ random variable integrated over its support, and hence equals $1$.
It follows from the Lehmann–Scheffé theorem that $\dfrac{n-1}{n} \cdot \dfrac{1}{\frac{1}{n} \sum_{i=1}^n Y_i} = \dfrac{n-1}{\sum_{i=1}^n (\ln X_i - \ln c) }$ is the UMVUE of $\theta$.
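The bias correction $(n-1)$ versus $n$ can also be seen in simulation. A minimal sketch (parameter values are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, c, n, reps = 4.0, 2.0, 15, 100_000

# Pareto(scale c, shape theta) samples via inverse CDF
x = c * rng.random((reps, n)) ** (-1.0 / theta)
s = (np.log(x) - np.log(c)).sum(axis=1)

naive = n / s        # the MLE: biased, with mean n*theta/(n-1)
umvue = (n - 1) / s  # the bias-corrected estimator

print(naive.mean())  # ≈ 4 * 15/14 ≈ 4.286
print(umvue.mean())  # ≈ 4.0
```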
By the Lehmann–Scheffé theorem, the UMVUE is an a.s.-unique function of the complete sufficient statistic, $X_{(n)}$. Here are some pieces of the puzzle that you will need to put together:
1) The density of $X_{(n)}$ is given by $$g(x|\theta) = n[c(\theta)]^nH(x)^{n-1}h(x), \;\;\; 0 < x < \theta $$
2) $H(\theta) = \dfrac{1}{c(\theta)}$
3) $E[H(X_{(n)})] = a_nH(\theta)$, where $a_n$ is a constant depending ONLY on $n$, and not $\theta$. You can calculate $a_n$ quite easily.
By Lehmann–Scheffé, you can conclude that $T = \dfrac{1}{a_n}H(X_{(n)})$ is the UMVUE.
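For reference, here is one way the $a_n$ computation goes, assuming $H$ is the antiderivative of $h$ with $H(0) = 0$ (so $H'(x) = h(x)$, which makes the integral below immediate):
$$
\mathrm{E}[H(X_{(n)})] = \int_0^\theta H(x)\, n[c(\theta)]^n H(x)^{n-1} h(x)\; dx
= n[c(\theta)]^n \left[ \frac{H(x)^{n+1}}{n+1} \right]_0^\theta
= \frac{n}{n+1}\, [c(\theta)]^n H(\theta)^{n+1}
= \frac{n}{n+1}\, H(\theta),
$$
using $H(\theta) = 1/c(\theta)$ in the last step. Under this assumption $a_n = \frac{n}{n+1}$, so $T = \frac{n+1}{n} H(X_{(n)})$.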