To answer your first question: the existence of an unbiased estimator does not guarantee that a UMVUE exists.
Consider a single observation $X$ having the uniform distribution on $(\theta,\theta+1)$ and suppose we have to estimate $g(\theta)$ for some function $g$.
Here $X$ is minimal sufficient for $\theta$. As for completeness of $X$, notice that since $\sin(2\pi x)$ has period $1$, $$E_{\theta}[\sin (2\pi X)]=\int_{\theta}^{\theta+1}\sin (2\pi x)\,dx=0\quad,\,\forall\,\theta\in\mathbb R$$
However $\sin (2\pi X)$ is not almost surely $0$, so that $X$ is not a complete statistic.
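As a side check (not part of the argument), here is a minimal numerical sketch; the seed and the particular values of $\theta$ below are arbitrary choices of mine:

```python
import numpy as np

# Check E_theta[sin(2*pi*X)] = 0 for X ~ Uniform(theta, theta+1):
# sin(2*pi*x) has period 1, so it integrates to 0 over any unit interval.
rng = np.random.default_rng(0)
for theta in [-2.3, 0.0, 0.7, 5.1]:
    x = rng.uniform(theta, theta + 1, size=10**6)
    print(theta, np.sin(2 * np.pi * x).mean())  # all ~ 0 up to Monte Carlo error
```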
In fact a complete sufficient statistic does not exist for this model.
To see whether a UMVUE of $g(\theta)$ actually exists, recall the necessary and sufficient condition for an unbiased estimator (with finite second moment) to be the UMVUE: it must be uncorrelated with every unbiased estimator of zero.
If possible, suppose $T$ is UMVUE of $g(\theta)$. Let $\mathcal U_0$ be the class of all unbiased estimators of zero.
Clearly for every $H\in \mathcal U_0$,
$$\int_{\theta}^{\theta+1}H(x)\,dx=0\quad,\,\forall\,\theta\in\mathbb R$$
Differentiating both sides of the last equation with respect to $\theta$ (by the Leibniz integral rule, the derivative of the left-hand side is $H(\theta+1)-H(\theta)$) gives
$$H(\theta+1)=H(\theta)\quad,\,\text{a.e.}\tag{1}$$
As $T$ is UMVUE, it is uncorrelated with every member of $\mathcal U_0$; since $E_{\theta}H=0$, this means $E_{\theta}(TH)=\operatorname{Cov}_{\theta}(T,H)=0$ for all $\theta$ and all $H\in \mathcal U_0$. In other words, $TH\in \mathcal U_0$ whenever $H\in \mathcal U_0$. So by the same argument that gave $(1)$ we have
$$T(\theta+1)H(\theta+1)=T(\theta)H(\theta)\quad,\,\text{a.e.}\tag{2}$$
Combining $(1)$ and $(2)$ gives $[T(\theta+1)-T(\theta)]\,H(\theta)=0$ a.e.; choosing $H(x)=\sin(2\pi x)\in\mathcal U_0$, which vanishes only on a null set, yields $$T(\theta)=T(\theta+1)\quad,\,\text{a.e.}\tag{3}$$
Again, as $T$ is unbiased for $g(\theta)$, $$\int_{\theta}^{\theta+1} T(x)\,dx=g(\theta)\quad,\,\forall\,\theta\in\mathbb R $$
Differentiating both sides with respect to $\theta$ and using equation $(3)$ yields
$$g'(\theta)=T(\theta+1)-T(\theta)=0\quad,\,\text{a.e.}$$
Since $g$ is an integral of the locally integrable function $T$, it is absolutely continuous, and an absolutely continuous function whose derivative vanishes a.e. is constant. This shows that $g(\theta)$ does not admit a UMVUE for any non-constant $g$.
So if you take $g(\theta)=\theta$, then $T=X-\frac12$ is unbiased for $\theta$, but $T$ is not UMVUE: it can be beaten at particular values of $\theta$ by other unbiased estimators, as the sketch below illustrates.
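Here is a minimal simulation sketch of that (the value $\theta=0$ and the coefficient $0.1$ are arbitrary choices): $X-\frac12+c\sin(2\pi X)$ is unbiased for every $c$, and at $\theta=0$ a suitable $c$ gives strictly smaller variance than $X-\frac12$, while at other values of $\theta$ the comparison reverses, so no unbiased estimator is uniformly best.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 0.0                                   # true parameter (arbitrary choice)
x = rng.uniform(theta, theta + 1, size=10**6)

t0 = x - 0.5                                  # unbiased for theta
t1 = x - 0.5 + 0.1 * np.sin(2 * np.pi * x)    # also unbiased: the sine term has mean 0

print(t0.mean(), t1.mean())  # both ~ theta = 0
print(t0.var(), t1.var())    # ~ 0.083 vs ~ 0.057: t1 beats t0 at theta = 0,
                             # but the comparison flips at e.g. theta = 0.5
```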
As for the second question: even if $T$ is an unbiased estimator (efficient or not) of $\theta$, that does not make $g(T)$ unbiased (let alone UMVUE) for $g(\theta)$ for an arbitrary nonlinear function $g$.
Among several possible examples, consider i.i.d. observations $X_1,\ldots,X_n$ having an exponential distribution with mean $\theta$. Then it is easy to verify that the sample mean $\overline X$ is an efficient estimator (and UMVUE) of $\theta$, but $\overline X^2$ is not even unbiased for $\theta^2$: $E_{\theta}\left[\overline X^2\right]=\left(1+\frac1n\right)\theta^2$, so in particular it is not the UMVUE of $\theta^2$ (that role is played by $\frac{n}{n+1}\overline X^2$, by Lehmann–Scheffé).
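A quick simulation sketch of this bias (the values $\theta=2$ and $n=5$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 2.0, 5, 10**5
xbar = rng.exponential(theta, size=(reps, n)).mean(axis=1)  # sample means

print(np.mean(xbar**2))                # ~ (1 + 1/n) * theta^2 = 4.8, biased
print(np.mean(n * xbar**2 / (n + 1)))  # ~ theta^2 = 4.0, the unbiased correction
```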
Your reasoning is correct, except that the MLE is not the UMVUE of the population variance.
Here the data are i.i.d. observations $X_1,\ldots,X_N$ from a $\mathsf{Bin}(n,p)$ distribution. A complete sufficient statistic for $p$ is $T=\sum\limits_{i=1}^N X_i$, which has a $\mathsf{Bin}(nN,p)$ distribution.
Now $E_p[T]=nNp$ and $\operatorname{Var}_p[T]=nNp(1-p)$ for all $p\in(0,1)$.
Again, $$E_p[T^2]=\operatorname{Var}_p[T]+(E_p[T])^2=nNp(1-p)+n^2N^2p^2$$
Or, $$E_p[T^2-T]=nNp^2(nN-1)$$
That is, $$E_p\left[\frac{T(T-1)}{N(nN-1)}\right]=np^2$$
So you have an unbiased estimator of the population variance $np(1-p)$ based on $T$ (and hence the UMVUE):
$$E_p\left[\frac TN-\frac{T(T-1)}{N(nN-1)}\right]=np-np^2=np(1-p)\quad,\forall\,p\in(0,1)$$
With $\overline X=\frac TN$, the sample variance $S^2=\frac1{N-1}\sum\limits_{i=1}^N (X_i-\overline X)^2$ is unbiased for the population variance. So by Lehmann–Scheffé, $E\left[S^2\mid T\right]$ is also the UMVUE of $np(1-p)$.
As UMVUE is unique whenever it exists, you can say
$$E\left[S^2\mid T\right]=\frac TN-\frac{T(T-1)}{N(nN-1)}\tag{*}$$
This can be rewritten in terms of $\overline X$ of course.
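As a sanity check, here is a short simulation sketch (the values $n=4$, $N=10$, $p=0.3$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, N, p, reps = 4, 10, 0.3, 10**5
T = rng.binomial(n, p, size=(reps, N)).sum(axis=1)  # T ~ Bin(n*N, p)

umvue = T / N - T * (T - 1) / (N * (n * N - 1))
print(umvue.mean())  # ~ n*p*(1-p) = 0.84
```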
A direct way to obtain $(*)$ would be to proceed using linearity of expectation.
I think it should be something like
\begin{align}
E\left[S^2\mid T=t\right]&=E\left[\frac{1}{N-1}\sum_{i=1}^N\left(X_i-\frac tN\right)^2\mid T=t\right]
\\&=E\left[\frac{1}{N-1}\left(\sum_{i=1}^N X_i^2-\frac{t^2}{N}\right)\mid T=t\right]
\\&=\frac{N}{N-1}E\left[X_1^2\mid T=t\right]-\frac{t^2}{N(N-1)},
\end{align}
where the last equality uses that the $X_i$ are exchangeable given $T$, so every $E[X_i^2\mid T=t]$ equals $E[X_1^2\mid T=t]$.
Now we only have to recall that $X_1$ conditioned on $T$ has a hypergeometric distribution.
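For completeness, a sketch of that last step using the standard hypergeometric moments: given $T=t$, $X_1$ counts how many of the $t$ total successes fall among the $n$ trials of the first observation, i.e. $X_1\mid T=t$ is hypergeometric with population size $nN$, $n$ tagged items and $t$ draws. Hence
$$E[X_1\mid T=t]=\frac tN\quad\text{and}\quad \operatorname{Var}(X_1\mid T=t)=t\cdot\frac1N\cdot\frac{N-1}{N}\cdot\frac{nN-t}{nN-1},$$
so that
$$E[X_1^2\mid T=t]=\frac{t(N-1)(nN-t)}{N^2(nN-1)}+\frac{t^2}{N^2}.$$
Substituting this into the display above gives
$$E[S^2\mid T=t]=\frac{t(nN-t)}{N(nN-1)}=\frac tN-\frac{t(t-1)}{N(nN-1)},$$
which is exactly $(*)$.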
Suppose $\theta$ is the unknown quantity of interest. A necessary and sufficient condition for an unbiased estimator of a parametric function $g(\theta)$ (assuming one exists and has finite second moment) to be the UMVUE is that it be uncorrelated with every unbiased estimator of zero. We can use this result to prove that the UMVUE is unique whenever it exists.
If possible, suppose $T_1$ and $T_2$ are both UMVUEs of $g(\theta)$.
Then $T_1-T_2$ is an unbiased estimator of zero, so that by the result above we have
$$\operatorname{Cov}_{\theta}(T_1,T_1-T_2)=0\quad,\,\forall\,\theta$$
Or, $$\operatorname{Var}_{\theta}(T_1)=\operatorname{Cov}_{\theta}(T_1,T_2)\quad,\,\forall\,\theta$$
Therefore, $$\operatorname{Corr}_{\theta}(T_1,T_2)=\frac{\operatorname{Cov}_{\theta}(T_1,T_2)}{\sqrt{\operatorname{Var}_{\theta}(T_1)}\sqrt{\operatorname{Var}_{\theta}(T_2)}}=\sqrt\frac{\operatorname{Var}_{\theta}(T_1)}{\operatorname{Var}_{\theta}(T_2)}\quad,\,\forall\,\theta$$
Since $T_1$ and $T_2$ both attain the minimum variance, $\operatorname{Var}_{\theta}(T_1)=\operatorname{Var}_{\theta}(T_2)$, so the correlation between $T_1$ and $T_2$ is exactly $1$. By the equality case of the Cauchy–Schwarz inequality, $T_1$ and $T_2$ are then linearly related, i.e. for some $a,b\,(\ne 0)$, $$T_1=a+bT_2 \quad,\text{ a.e. }$$
Taking variance on both sides of the above equation gives $b^2=1$. In fact $b=1$: the correlation is $+1$, so $b$ must be positive (equivalently, $b=-1$ is ruled out because taking expectations would give $a=2g(\theta)$, and the constant $a$ cannot depend on $\theta$). So $T_1=a+T_2$ a.e., and taking expectations gives $a=0$. Thus, $$T_1=T_2\quad,\text{ a.e. }$$