To side-step dependencies arising when we consider the sample variance, we write
$$(n-1)s^2 = \sum_{i=1}^n\Big((X_i-\mu) -(\bar x-\mu)\Big)^2$$
$$=\sum_{i=1}^n\Big(X_i-\mu\Big)^2-2\sum_{i=1}^n\Big((X_i-\mu)(\bar x-\mu)\Big)+\sum_{i=1}^n\Big(\bar x-\mu\Big)^2$$
and after a little manipulation (using $\sum_{i=1}^n(X_i-\mu) = n(\bar x-\mu)$, so the cross term equals $-2n(\bar x-\mu)^2$),
$$=\sum_{i=1}^n\Big(X_i-\mu\Big)^2 - n\Big(\bar x-\mu\Big)^2$$
Therefore
$$\sqrt n(s^2 - \sigma^2) = \frac {\sqrt n}{n-1}\sum_{i=1}^n\Big(X_i-\mu\Big)^2 -\sqrt n \sigma^2- \frac {\sqrt n}{n-1}n\Big(\bar x-\mu\Big)^2 $$
Manipulating,
$$\sqrt n(s^2 - \sigma^2) = \frac {\sqrt n}{n-1}\sum_{i=1}^n\Big(X_i-\mu\Big)^2 -\sqrt n \frac {n-1}{n-1}\sigma^2- \frac {n}{n-1}\sqrt n\Big(\bar x-\mu\Big)^2 $$
$$=\frac {n\sqrt n}{n-1}\frac 1n\sum_{i=1}^n\Big(X_i-\mu\Big)^2 -\sqrt n \frac {n-1}{n-1}\sigma^2- \frac {n}{n-1}\sqrt n\Big(\bar x-\mu\Big)^2$$
$$=\frac {n}{n-1}\left[\sqrt n\left(\frac 1n\sum_{i=1}^n\Big(X_i-\mu\Big)^2 -\sigma^2\right)\right] + \frac {\sqrt n}{n-1}\sigma^2 -\frac {n}{n-1}\sqrt n\Big(\bar x-\mu\Big)^2$$
The term $n/(n-1)$ becomes unity asymptotically. The term $\frac {\sqrt n}{n-1}\sigma^2$ is deterministic and goes to zero as $n \rightarrow \infty$.
We also have $\sqrt n\Big(\bar x-\mu\Big)^2 = \left[\sqrt n\Big(\bar x-\mu\Big)\right]\cdot \Big(\bar x-\mu\Big)$. The first component converges in distribution to a Normal, the second converges in probability to zero. Then by Slutsky's theorem the product converges in probability to zero,
$$\sqrt n\Big(\bar x-\mu\Big)^2\xrightarrow{p} 0$$
We are left with the term
$$\left[\sqrt n\left(\frac 1n\sum_{i=1}^n\Big(X_i-\mu\Big)^2 -\sigma^2\right)\right]$$
Alerted by a lethal example offered by @whuber in a comment to this answer, we want to make certain that $(X_i-\mu)^2$ is not constant. Whuber pointed out that if $X_i$ is a Bernoulli $(1/2)$ then $(X_i - \mu)^2 = 1/4$ for both outcomes, a constant. So excluding variables for which this happens (perhaps other dichotomous, not just $0/1$ binary?), for the rest we have
$$\mathrm{E}\Big(X_i-\mu\Big)^2 = \sigma^2,\;\; \operatorname {Var}\left[\Big(X_i-\mu\Big)^2\right] = \mu_4 - \sigma^4$$
and so the term under investigation is the usual subject of the classical Central Limit Theorem, applied to the i.i.d. variables $(X_i-\mu)^2$. Combining the pieces via Slutsky's theorem,
$$\sqrt n(s^2 - \sigma^2) \xrightarrow{d} N\left(0,\mu_4 - \sigma^4\right)$$
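This limit is easy to check numerically. Below is a small Monte Carlo sketch (my own addition, not part of the derivation): for $X_i \sim \mathrm{Exp}(1)$ we have $\sigma^2 = 1$ and central fourth moment $\mu_4 = 9$, so the asymptotic variance $\mu_4 - \sigma^4$ should be $8$.

```python
import numpy as np

# Simulate sqrt(n) * (s^2 - sigma^2) many times for Exp(1) data,
# where sigma^2 = 1 and mu_4 = 9, so mu_4 - sigma^4 = 8.
rng = np.random.default_rng(0)
n, reps = 500, 40_000
samples = rng.exponential(1.0, size=(reps, n))
s2 = samples.var(axis=1, ddof=1)      # sample variances, one per replication
stat = np.sqrt(n) * (s2 - 1.0)        # sqrt(n) * (s^2 - sigma^2)
print(stat.var())                     # should be close to 8
```

The empirical variance of the statistic lands near $8$, in line with the limit above (at finite $n$ there is a small $O(1/n)$ discrepancy).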
Note: the above result of course holds also for normally distributed samples, but in this last case we have also available a finite-sample chi-square distributional result.
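As a consistency check for the normal case (a side calculation I am adding, not in the original answer): for $X_i \sim N(\mu, \sigma^2)$ we have $\mu_4 = 3\sigma^4$, so the asymptotic variance is $\mu_4 - \sigma^4 = 2\sigma^4$, and this agrees with the exact chi-square result:

$$\frac{(n-1)s^2}{\sigma^2} \sim \chi^2_{n-1} \;\Rightarrow\; \operatorname{Var}\left[\sqrt n\big(s^2 - \sigma^2\big)\right] = n \cdot \frac{\sigma^4}{(n-1)^2}\cdot 2(n-1) = \frac{2n}{n-1}\,\sigma^4 \rightarrow 2\sigma^4$$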
I will try to give an intuitive example to understand why the arithmetic mean
\begin{equation} \overline x_1 = \sum_{i=1}^{n} \frac{x_i}{n}
\end{equation}
is not as good as
\begin{equation} \overline x_2 = \frac{a + b}{2}
\end{equation}
where $a$ and $b$ are the smallest and largest observations in the sample, in the case where $X \sim \mathrm{unif}(\alpha,\beta)$.
Imagine that you have 10 observations from $\mathrm{unif}(1,11)$
(there is a candy factory that puts candies of 1cm, 2cm, ..., 11cm, 1cm, 2cm, ..., 11cm, ... in separate bags). We know from the above information that the real mean is 6 (for the factory example, the factory would have spent the same amount of sugar if each candy were 6cm).
Now, if you don't know any of the above and take a random sample to estimate the mean, then $\bar x_2$ only requires the smallest and the largest possible values to appear in your sample: as soon as they do, it gives the correct answer with zero error!
$\bar x_1$, on the other hand, is sensitive to every single value that you get and will "fluctuate" around the real value. Furthermore, if the largest (or equivalently the smallest) value doesn't appear in your sample, then again $\bar x_2$ will almost always be closer to the real mean than $\bar x_1$. $\bar x_1$ will be better only if your sample happens to be already centered around 6, which is less likely than the other possible scenarios.
For the candy factory example: if you try to predict the "average candy" in each bag, it's better to take the average of the smallest and the largest candy you have seen so far than to average the candies in every single bag you open and change your prediction (and thus your error) after every bag.
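The intuition above can be checked with a short simulation (a sketch I am adding here; the distribution $\mathrm{unif}(1,11)$ and the sample size 10 are taken from the example, and the midrange has a known smaller variance for the uniform case):

```python
import numpy as np

# Compare the mean squared error of the sample mean (x1) and the
# midrange (x2) for samples of size 10 from unif(1, 11); true mean = 6.
rng = np.random.default_rng(1)
reps, n = 100_000, 10
x = rng.uniform(1, 11, size=(reps, n))
x1 = x.mean(axis=1)                        # arithmetic mean
x2 = (x.min(axis=1) + x.max(axis=1)) / 2   # midrange
mse1 = np.mean((x1 - 6) ** 2)
mse2 = np.mean((x2 - 6) ** 2)
print(mse1, mse2)   # midrange should have the smaller MSE
```

For the continuous uniform the theoretical values are $\operatorname{Var}(\bar x_1) = (\beta-\alpha)^2/(12n) \approx 0.83$ and $\operatorname{Var}(\bar x_2) = (\beta-\alpha)^2/\big(2(n+1)(n+2)\big) \approx 0.38$, so the midrange wins by roughly a factor of two at $n = 10$.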
Best Answer
$$ \operatorname{var}\left( \frac 1 N \sum_{i=1}^N X_i\right) = \frac 1 {N^2} \operatorname{var}\left( \sum_{i=1}^N X_i \right) = \frac 1 {N^2} \sum_{i=1}^N \operatorname{var}(X_i) = \cdots $$
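Note that the middle equality above requires the $X_i$ to be independent (or at least uncorrelated), so that the variance of the sum splits into the sum of the variances. A quick numerical sketch (my own illustration, using $N = 50$ i.i.d. draws with $\operatorname{var}(X_i) = 4$):

```python
import numpy as np

# var(mean of N i.i.d. draws) should be close to var(X) / N.
rng = np.random.default_rng(2)
N, reps = 50, 200_000
x = rng.normal(0, 2, size=(reps, N))   # var(X_i) = 4
means = x.mean(axis=1)
print(means.var())                     # should be close to 4 / 50 = 0.08
```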