Your histograms seem to suggest that for very large $n$ the distribution concentrates around $9$, which is $\sigma^2$.
You might add a vertical line to each histogram at the mean of its variance estimates, with the first presumably somewhere near $9.47$ and the second much nearer $9$, to suggest asymptotic unbiasedness. If this is difficult to visualise (the first line will be closer to $10$ than the second), then repeat the first plot with perhaps $n=10$ and a vertical line near $10$.
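For the vertical lines, here is a minimal Matplotlib sketch (assuming NumPy/Matplotlib; `var_hat` is just a placeholder for whichever array already holds your simulated variance estimates for one value of $n$, and the placeholder data below is illustrative only):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder variance estimates; substitute the array you already computed.
rng = np.random.default_rng(0)
var_hat = rng.chisquare(df=19, size=5_000) * 9 / 19   # illustrative only

plt.hist(var_hat, bins=50, density=True)
plt.axvline(var_hat.mean(), color="red",
            label=f"mean of estimates = {var_hat.mean():.2f}")
plt.axvline(9, color="black", linestyle="--", label=r"true $\sigma^2 = 9$")
plt.legend()
plt.show()
```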
A couple of hints first and then I'll put the solution underneath.
Hint 1: $\left|\hat{\Theta}_N - \Theta\right|$ (the first term in the "convergence in probability" definition) is equal to $\hat{\Theta}_N - \Theta$ since by definition, $\hat{\Theta}_N \geq \Theta$.
Hint 2: For any constant $c$, $\mathbb{P}\left(\hat{\Theta}_N > c\right)$ is the probability that the minimum of the $X_i$'s exceeds $c$. The minimum of the $X_i$'s exceeds $c$ if and only if $X_1 > c, X_2 > c, \dots, X_N > c$.
Solution:
$\hat{\Theta}_N$ converges in probability to $\Theta$ if, for any $\epsilon > 0$,
$$\lim_{N \rightarrow \infty} \mathbb{P}\left(\left|\hat{\Theta}_N - \Theta\right| > \epsilon\right) = 0.$$
Now, by Hint 1 we can drop the absolute value, and applying Hint 2 with $c = \Theta + \epsilon$ gives
$$\mathbb{P}\left(\hat{\Theta}_N - \Theta > \epsilon\right) = \mathbb{P}\left(\hat{\Theta}_N > \Theta + \epsilon\right) = \mathbb{P}\left(X_1 > \Theta + \epsilon, X_2 > \Theta + \epsilon, \dots, X_N > \Theta + \epsilon \right).$$
Since the samples are independent and identically distributed,
$$\mathbb{P}\left(X_1 > \Theta + \epsilon, \dots, X_N > \Theta + \epsilon \right) = \mathbb{P}\left(X_1 > \Theta + \epsilon\right)\mathbb{P}\left(X_2 > \Theta + \epsilon\right) \cdots \mathbb{P}\left(X_N > \Theta + \epsilon \right).$$
We know that $X_i \sim \mathbb{U}[\Theta, \Theta + 1]$, so $\mathbb{P}\left(X_i > \Theta + \epsilon \right) = 1 - \epsilon$ (as long as $\epsilon < 1$, otherwise the probability is $0$).
Hence
$$\mathbb{P}\left(\hat{\Theta}_N - \Theta > \epsilon\right) = \begin{cases}
\left(1-\epsilon\right)^N & \text{if } \epsilon < 1,\\
0 & \text{if } \epsilon \geq 1.
\end{cases}.$$
Taking the limit as $N \rightarrow \infty$,
$$\lim_{N \rightarrow \infty} \mathbb{P}\left(\left|\hat{\Theta}_N - \Theta\right| > \epsilon\right) = \lim_{N \rightarrow \infty} \begin{cases}
\left(1-\epsilon\right)^N & \text{if } \epsilon < 1,\\
0 & \text{if } \epsilon \geq 1 \end{cases} = 0,$$
since $0 \leq 1 - \epsilon < 1$ whenever $0 < \epsilon < 1$. Hence $\hat{\Theta}_N$ is a consistent estimator of $\Theta$.
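If you want a numerical sanity check of the closed form, here is a small Monte Carlo sketch (Python/NumPy assumed; $\Theta = 2$ and $\epsilon = 0.1$ are arbitrary illustrative choices, not values from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, eps, reps = 2.0, 0.1, 100_000   # arbitrary illustrative values

for N in (5, 20, 50, 100):
    # X_i ~ U[theta, theta + 1]; the estimator is the sample minimum.
    x = rng.uniform(theta, theta + 1, size=(reps, N))
    theta_hat = x.min(axis=1)
    empirical = np.mean(theta_hat - theta > eps)
    print(f"N={N:3d}  empirical={empirical:.4f}  (1-eps)^N={(1 - eps) ** N:.4f}")
```

The empirical tail probability should track $(1-\epsilon)^N$ and shrink towards $0$ as $N$ grows.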
Best Answer
Consistency does not imply that the variance goes to $0$. You can find counterexamples to this, for instance here:
https://stats.stackexchange.com/questions/74047/why-dont-asymptotically-consistent-estimators-have-zero-variance-at-infinity
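As a self-contained illustration (my own sketch, not necessarily the construction used in the linked thread): take $T_n$ equal to the sample mean of $n$ i.i.d. $N(\mu, 1)$ draws with probability $1 - 1/n$ and equal to $n$ with probability $1/n$. Then $T_n$ is consistent for $\mu$, yet $\operatorname{Var}(T_n)$ grows without bound.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, reps = 0.0, 200_000

# T_n = sample mean of n i.i.d. N(mu, 1) draws with probability 1 - 1/n,
# and T_n = n with probability 1/n.  The sample mean is drawn directly
# as N(mu, 1/n), which has the same distribution.
for n in (10, 100, 1_000):
    xbar = rng.normal(mu, 1 / np.sqrt(n), size=reps)
    contaminated = rng.random(reps) < 1 / n
    t_n = np.where(contaminated, n, xbar)
    tail = np.mean(np.abs(t_n - mu) > 0.1)   # shrinks with n: consistency
    print(f"n={n:5d}  P(|T_n - mu| > 0.1) = {tail:.4f}  Var(T_n) = {t_n.var():.1f}")
```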