Suppose $\theta$ is the unknown quantity of interest. A necessary and sufficient condition for an unbiased estimator (assuming one exists) of some parametric function $g(\theta)$ to be the UMVUE is that it be uncorrelated with every unbiased estimator of zero (assuming, of course, that the estimators have finite second moments). We can use this result to prove uniqueness of the UMVUE whenever it exists.
If possible, suppose $T_1$ and $T_2$ are both UMVUEs of $g(\theta)$.
Then $T_1-T_2$ is an unbiased estimator of zero, so that by the result above we have
$$\operatorname{Cov}_{\theta}(T_1,T_1-T_2)=0\quad,\,\forall\,\theta$$
Or, $$\operatorname{Var}_{\theta}(T_1)=\operatorname{Cov}_{\theta}(T_1,T_2)\quad,\,\forall\,\theta$$
Therefore, $$\operatorname{Corr}_{\theta}(T_1,T_2)=\frac{\operatorname{Cov}_{\theta}(T_1,T_2)}{\sqrt{\operatorname{Var}_{\theta}(T_1)}\sqrt{\operatorname{Var}_{\theta}(T_2)}}=\sqrt\frac{\operatorname{Var}_{\theta}(T_1)}{\operatorname{Var}_{\theta}(T_2)}\quad,\,\forall\,\theta$$
Since $T_1$ and $T_2$ have the same variance by assumption, the correlation between $T_1$ and $T_2$ is exactly $1$. In other words, $T_1$ and $T_2$ are linearly related, i.e. for some constants $a$ and $b\,(\ne 0)$, $$T_1=a+bT_2 \quad,\text{ a.e. }$$
Taking variance on both sides of the above equation gives $b^2=1$. The case $b=-1$ is ruled out: taking expectations would give $g(\theta)=a-g(\theta)$, i.e. $a=2g(\theta)$, so that $T_1=2g(\theta)-T_2$ a.e., which cannot hold since $T_1,T_2$ (and the constant $a$) do not depend on $\theta$. So $b=1$, i.e. $T_1=a+T_2$ a.e., and taking expectations gives $a=0$. Thus, $$T_1=T_2\quad,\text{ a.e. }$$
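If a numerical illustration helps, here is a minimal Monte Carlo sketch of the orthogonality condition above, in a concrete model of my own choosing (normal mean with known variance, where $\bar X$ is the UMVUE and $X_1-X_2$ is an unbiased estimator of zero; the model, sample size, and estimators are illustrative assumptions, not part of the question):

```python
# Sketch: the UMVUE should be uncorrelated with every unbiased estimator of 0.
# Assumed setting: X_1,...,X_n i.i.d. N(theta, 1); Xbar is the UMVUE of theta.
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 3.0, 10, 200_000

x = rng.normal(theta, 1.0, size=(reps, n))
umvue = x.mean(axis=1)        # T = Xbar, the UMVUE of theta
zero_est = x[:, 0] - x[:, 1]  # Z = X_1 - X_2, an unbiased estimator of 0

# Sample covariance of T and Z should be ~0, up to Monte Carlo noise
print(np.cov(umvue, zero_est)[0, 1])
```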
In general, finding UMVUEs from scratch can be quite tedious. Reading a little about the concepts of ancillary statistics and Basu's theorem greatly simplifies the computations, and with them you will handle such problems comfortably.
To give you an idea: suppose $T$ is an unbiased estimator of $\theta$ and $V$ is a complete sufficient statistic for $\theta$, and suppose the ratio $S=\dfrac{T}{V}$ happens to be ancillary, i.e. its distribution does not depend on $\theta$ (as is common in scale families). Then Basu's theorem says $S$ is independent of $V$, and the UMVUE of $\theta$ is
$$E(T\mid V)=E\left(\frac{T}{V}\cdot V\,\middle|\,V\right)=E(SV\mid V)=V\,E(S\mid V)=V\,E(S)$$
Moreover, by independence, $E(T)=E(SV)=E(S)\,E(V)$, so that $$E(S)=\dfrac{E(T)}{E(V)}$$
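As a sanity check, here is a minimal Monte Carlo sketch of this trick in a scale family of my own choosing (the exponential model, the choice $T=X_1$, and the parameter values below are illustrative assumptions): here $V=\sum X_i$ is complete sufficient, $T=X_1$ is unbiased for $\theta$, and $S=X_1/\sum X_i$ is ancillary with $E(S)=1/n$ by symmetry, so $E(T\mid V)=V/n=\bar X$.

```python
# Sketch of the Basu trick: X_1,...,X_n i.i.d. Exponential with mean theta.
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.5, 5, 200_000

x = rng.exponential(theta, size=(reps, n))  # numpy's scale = mean
T = x[:, 0]          # unbiased estimator of theta
V = x.sum(axis=1)    # complete sufficient statistic
S = T / V            # ancillary: its distribution is free of theta

print(S.mean(), 1 / n)          # E(S) ~ 1/n, regardless of theta
print(np.corrcoef(S, V)[0, 1])  # ~0: Basu says S is independent of V
print((V / n).mean(), theta)    # the resulting UMVUE V/n is unbiased
```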
For more information, please refer to the following source:
Boos, D. D. and Hughes-Oliver, J. M. (1998), "Applications of Basu's Theorem," The American Statistician, 52(3), 218–221.
Best Answer
Joint density of $X_1,X_2,\ldots,X_n$ is
\begin{align} f_{\theta}(x_1,x_2,\ldots,x_n)&=\prod_{i=1}^n f(x_i\mid\theta) \\&=\left(\frac{2}{\theta}\right)^n \left(\prod_{i=1}^n x_i\right) \exp\left(-\frac{1}{\theta}\sum_{i=1}^n x_i^2\right)\mathbf1_{x_1,\ldots,x_n>0}\quad,\,\theta>0 \end{align}
This pdf is a member of the one-parameter exponential family: it has the form $h(\mathbf x)\,c(\theta)\,\exp\left(w(\theta)\,t(\mathbf x)\right)$ with $w(\theta)=-\frac{1}{\theta}$ and $t(\mathbf x)=\sum_{i=1}^n x_i^2$, where $w(\theta)$ ranges over an open interval as $\theta$ varies over $(0,\infty)$.
So it follows that a complete sufficient statistic for $\theta$ is indeed
$$U(X_1,X_2,\ldots,X_n)=\sum_{i=1}^n X_i^2$$
Yes, it is true that the UMVUE of $\theta$, if it exists, is given by $E(T\mid U)$ where $T$ is any unbiased estimator of $\theta$. This is what the Lehmann–Scheffé theorem says. As a corollary, it also says that any unbiased estimator of $\theta$ that is a function of a complete sufficient statistic has to be the UMVUE of $\theta$. It is this corollary that comes in handy here.
To make sense of the hint given, find the distribution of $Y=X^2$ where $X$ has the Rayleigh pdf you are given.
Via change of variables, the pdf of $Y$ is
\begin{align} f_Y(y)&=f(\sqrt y\mid\theta)\left|\frac{dx}{dy}\right|\mathbf1_{y>0} \\&=\frac{2\sqrt y}{\theta}e^{-y/\theta}\cdot\frac{1}{2\sqrt y}\mathbf1_{y>0} \\&=\frac{1}{\theta}e^{-y/\theta}\mathbf1_{y>0}\quad,\,\theta>0 \end{align}
In other words, $X_1^2,\ldots,X_n^2$ are i.i.d. Exponential with mean $\theta$.
Or, $$\frac{2}{\theta}X_i^2\stackrel{\text{ i.i.d }}\sim\text{Exp with mean }2\equiv \chi^2_2$$
Thus implying $$\frac{2}{\theta}\sum_{i=1}^n X_i^2=\frac{2U}{\theta} \sim \chi^2_{2n}$$
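A quick simulation corroborates this pivotal distribution. The sketch below (parameter values are arbitrary choices of mine) uses numpy's Rayleigh sampler, whose scale parameter $\sigma$ relates to the pdf here via $\theta=2\sigma^2$, i.e. $\sigma=\sqrt{\theta/2}$:

```python
# Sketch: check that 2U/theta matches chi^2_{2n} in mean (2n) and variance (4n).
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 4.0, 8, 200_000

# numpy's Rayleigh pdf is (x/sigma^2) e^{-x^2/(2 sigma^2)}; matching it to
# (2x/theta) e^{-x^2/theta} gives sigma = sqrt(theta/2)
x = rng.rayleigh(scale=np.sqrt(theta / 2), size=(reps, n))
U = (x ** 2).sum(axis=1)
pivot = 2 * U / theta

print(pivot.mean(), 2 * n)  # chi^2_{2n} has mean 2n
print(pivot.var(), 4 * n)   # ... and variance 4n
```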
So,
\begin{align} E_{\theta}\left(\frac{2U}{\theta}\right)=2n\implies E_{\theta}\left(\frac{U}{n}\right)=\theta \end{align}
Hence the UMVUE of $\theta$ is $$\boxed{\frac{U}{n}=\frac{1}{n}\sum_{i=1}^n X_i^2}$$
However, finding the distribution of $X_i^2$ was not strictly required, since it is easy to show directly that $$E_{\theta}(U)=\sum_{i=1}^n \underbrace{E_{\theta}(X_i^2)}_{\theta}=n\theta$$
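Finally, a short Monte Carlo sketch (with arbitrary parameter choices of mine) illustrating the "minimum variance" part: $U/n$ is unbiased, and its variance is well below that of another unbiased estimator such as $X_1^2$, as $\theta^2/n$ versus $\theta^2$.

```python
# Sketch: compare the UMVUE U/n against the naive unbiased estimator X_1^2.
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 4.0, 8, 200_000

x = rng.rayleigh(scale=np.sqrt(theta / 2), size=(reps, n))
umvue = (x ** 2).mean(axis=1)  # U/n
naive = x[:, 0] ** 2           # X_1^2, also unbiased for theta

print(umvue.mean(), naive.mean(), theta)  # both ~theta (unbiased)
print(umvue.var(), naive.var())           # ~theta^2/n vs ~theta^2
```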