An example arises when the model is misspecified.
Assume that we have an i.i.d. sample of size $n$ of random variables following the Half Normal distribution. The density and moments of this distribution are
$$f_H(x) = \sqrt{2/\pi}\cdot \frac 1{v^{1/2}}\cdot \exp\big\{-\frac {x^2}{2v}\big\}\\ E_H(X) = \sqrt{2/\pi}\cdot v^{1/2}\equiv \mu_x,\;\; \operatorname{Var}_H(X) = \left(1-\frac 2{\pi}\right)v$$
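As a quick numerical sanity check of these moment formulas, one can compare them against `scipy.stats.halfnorm`, whose `scale` parameter corresponds to $\sqrt v$ under this parameterization ($v = 2$ is an arbitrary example value):

```python
import numpy as np
from scipy.stats import halfnorm

v = 2.0                       # arbitrary example value of the parameter
sigma = np.sqrt(v)            # scipy's halfnorm `scale` corresponds to sqrt(v) here

# Theoretical moments from the formulas above
mu_x = np.sqrt(2 / np.pi) * np.sqrt(v)
var_x = (1 - 2 / np.pi) * v

print(halfnorm.mean(scale=sigma), mu_x)   # both ≈ 1.1284
print(halfnorm.var(scale=sigma), var_x)   # both ≈ 0.7268
```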
The log-likelihood of the sample is
$$L(v\mid \mathbf x) = n\ln\sqrt{2/\pi}-\frac n2\ln v -\frac {1}{2v}\sum_{i=1}^nx_i^2$$
The first and second derivatives with respect to $v$ are
$$\frac {\partial}{\partial v}L(v\mid \mathbf x) = -\frac n{2v} + \frac {1}{2v^2}\sum_{i=1}^nx_i^2,\;\; \frac {\partial^2}{\partial v^2}L(v\mid \mathbf x) = \frac n{2v^2} - \frac {1}{v^3}\sum_{i=1}^nx_i^2$$
So the Fisher Information for parameter $v$ is
$$\mathcal I(v) = -E\left[\frac {\partial^2}{\partial v^2}L(v\mid \mathbf x)\right] = -\frac n{2v^2} + \frac {1}{v^3}\sum_{i=1}^nE(x_i^2) = -\frac n{2v^2} + \frac {n}{v^3}E(X^2)$$
$$=-\frac n{2v^2} + \frac {n}{v^3}\left[\operatorname{Var}(X)+\left(E[X]\right)^2\right] = -\frac n{2v^2} + \frac {n}{v^3}v$$
$$\Rightarrow \mathcal I(v) = \frac n{2v^2}$$
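This value can be checked by Monte Carlo: averaging the observed information $-\partial^2 L/\partial v^2$ over many simulated samples should recover $n/(2v^2)$. A sketch with arbitrary example values $v = 2$, $n = 50$:

```python
import numpy as np

rng = np.random.default_rng(0)
v, n, reps = 2.0, 50, 100_000          # arbitrary example values

# Draws from Half Normal(v) are |N(0, v)|
x = np.abs(rng.normal(0.0, np.sqrt(v), size=(reps, n)))

# Observed information: minus the second derivative of L, evaluated at the true v
obs_info = -n / (2 * v**2) + (x**2).sum(axis=1) / v**3

fisher_mc = obs_info.mean()            # Monte Carlo estimate of I(v)
fisher_theory = n / (2 * v**2)         # = 6.25 for these values
print(fisher_mc, fisher_theory)
```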
The Fisher Information for the mean $\mu_x$ is then
$$\mathcal I (\mu_x) = \mathcal I(v) \cdot \left(\frac {\partial \mu_x}{\partial v}\right)^{-2} = \frac n{2v^2}\cdot \left(\sqrt{2/\pi}\frac 12 v^{-1/2}\right)^{-2} = \frac {n\pi}{v}$$
and so the Cramer-Rao lower bound for the mean is
$$CRLB (\mu_x) = \left[\mathcal I (\mu_x)\right]^{-1} = \frac {v}{n\pi}$$
Assume now that we want to estimate the mean using maximum-likelihood, but we make a mistake: we assume that these random variables follow an Exponential distribution with density
$$g(x) = \frac 1{\beta}\cdot \exp\big\{-(1/\beta)x\big\}$$
The mean here is equal to $\beta$, and the maximum likelihood estimator will be
$$\hat \beta_{mMLE} = \hat E(X)_{mMLE} = \frac 1n\sum_{i=1}^nx_i$$
where the lowercase $m$ denotes that this estimator is based on a misspecified density.
Nevertheless, its moments must be computed under the true density that the $X$'s actually follow. We then see that this estimator is unbiased, since
$$E_H[\hat E(X)_{mMLE}] = \frac 1n\sum_{i=1}^nE_H[x_i] = E_H(X) = \mu_x$$
while its variance is
$$\operatorname{Var}(\hat E(X)_{mMLE}) = \frac 1n\operatorname{Var}_H(X) = \frac 1n\left(1-\frac 2{\pi}\right)v$$
This variance is greater than the Cramer-Rao lower bound for the mean because
$$ \operatorname{Var}(\hat E(X)_{mMLE}) = \frac 1n\left(1-\frac 2{\pi}\right)v > CRLB (\mu_x) = \frac {v}{n\pi} $$
$$\iff 1-\frac 2{\pi} > \frac {1}{\pi} \iff 1 > \frac 3{\pi} \approx 0.955$$
which holds. So we have an MLE that is unbiased but does not attain the Cramer-Rao lower bound for the quantity it estimates. Its efficiency is
$$\frac {CRLB (\mu_x)}{\operatorname{Var}(\hat E(X)_{mMLE})} = \frac {\frac {v}{n\pi}}{\frac 1n\left(1-\frac 2{\pi}\right)v} = \frac 1{\pi - 2} \approx 0.876$$
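A simulation confirms both the unbiasedness and the efficiency of roughly $0.876$ (a sketch with arbitrary example values $v = 2$, $n = 100$):

```python
import numpy as np

rng = np.random.default_rng(1)
v, n, reps = 2.0, 100, 100_000        # arbitrary example values

# Draws from the true Half Normal(v): |N(0, v)|
x = np.abs(rng.normal(0.0, np.sqrt(v), size=(reps, n)))
mean_hat = x.mean(axis=1)             # the misspecified-Exponential MLE of the mean

mu_x = np.sqrt(2 / np.pi * v)         # true mean
crlb = v / (n * np.pi)                # Cramer-Rao lower bound for the mean

bias = mean_hat.mean() - mu_x         # ~0: the estimator is unbiased
eff = crlb / mean_hat.var()           # ~1/(pi - 2) ≈ 0.876
print(bias, eff)
```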
Note that the MLE for the mean under the correct specification, $\sqrt{2/\pi}\,\hat v^{1/2}$ with $\hat v = \frac 1n\sum_{i=1}^n x_i^2$, is biased downward: the square root is concave, so Jensen's inequality gives $E\big[\hat v^{1/2}\big] < \big(E[\hat v]\big)^{1/2} = v^{1/2}$.
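This downward bias shows up clearly in simulation. Under the correct Half Normal model the MLE of $v$ is $\hat v = \frac 1n\sum x_i^2$, and the plug-in MLE of the mean is $\sqrt{2/\pi}\,\hat v^{1/2}$; in the sketch below, $v = 2$ and a deliberately small $n = 20$ are arbitrary choices that make the bias visible:

```python
import numpy as np

rng = np.random.default_rng(2)
v, n, reps = 2.0, 20, 200_000                 # small n makes the bias visible

x = np.abs(rng.normal(0.0, np.sqrt(v), size=(reps, n)))
v_hat = (x**2).mean(axis=1)                   # MLE of v under the correct model
mu_hat = np.sqrt(2 / np.pi) * np.sqrt(v_hat)  # plug-in MLE of the mean

mu_x = np.sqrt(2 / np.pi * v)                 # true mean
bias = mu_hat.mean() - mu_x                   # negative: downward bias (Jensen)
print(bias)
```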
Best Answer
The Cramer-Rao lower bound can never be violated if all of the following conditions hold:
$(i)$ $\quad\Theta$ is an open interval in $\mathbb{R}$;

$(ii)$ $\quad$the support $\{(x_1,\dots,x_n)\in \mathbb{X} \mid f_\theta(x_1,\dots,x_n)>0\}$ does not depend on $\theta$;

$(iii)$ $\quad\frac{\partial}{\partial\theta}f_\theta(x_1,\dots,x_n)$ exists for each $(x_1,\dots,x_n) \in \mathbb{X}$ and $\theta \in \Theta$;

$(iv)$ $\quad\int_{\mathbb{X}} \frac{\partial}{\partial\theta}f_\theta(x_1,\dots,x_n)\,dx_1\cdots dx_n=0$.
Of course, $(ii)$ fails for distribution families in which the support depends on $\theta$, like the uniform distribution on $(0,\theta)$ you mentioned above.
Note that in practice the other conditions are straightforward and need no special verification, since we deal with smooth functions: for example, $(iv)$ is equivalent to saying that the identity
$$\int_{\mathbb{X}}f_\theta(x_1,\dots,x_n)\,dx_1\cdots dx_n=1$$
can be differentiated under the integral sign.
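For the Half Normal density used above, condition $(iv)$ can be verified symbolically. A sketch using `sympy` (declaring `x` and `v` positive so the Gaussian integrals evaluate in closed form):

```python
import sympy as sp

x, v = sp.symbols('x v', positive=True)
f = sp.sqrt(2 / sp.pi) / sp.sqrt(v) * sp.exp(-x**2 / (2 * v))

# The density integrates to 1 for every v ...
total = sp.integrate(f, (x, 0, sp.oo))
# ... so differentiating under the integral sign must give 0, which is (iv)
deriv_inside = sp.integrate(sp.diff(f, v), (x, 0, sp.oo))

print(sp.simplify(total), sp.simplify(deriv_inside))
```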