Your first SE formula is correct. The second SE formula, which concerns sensitivity, should have the total number of positive cases in the denominator:
$$SE_\text{sensitivity} = \sqrt{ \frac{SENS(1-SENS)}{TP+FN}} $$
The logic is that sensitivity = $\frac{TP}{TP+FN}$, and the denominator in the SE formula is the same.
As @onestop pointed out in their comment, methods of calculating a binomial proportion confidence interval can be used here. The method you follow is the normal approximation; however, unless you have really large counts, other methods such as the Wilson interval will be more accurate.
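To make the comparison concrete, here is a small sketch computing both intervals for sensitivity. The counts (18 true positives, 2 false negatives) are made-up illustrative numbers, not from the question:

```python
import math

def sensitivity_cis(tp, fn, z=1.96):
    """Normal-approximation and Wilson CIs for sensitivity = TP/(TP+FN)."""
    n = tp + fn
    p = tp / n

    # Normal (Wald) approximation: p +/- z * sqrt(p(1-p)/n)
    se = math.sqrt(p * (1 - p) / n)
    wald = (p - z * se, p + z * se)

    # Wilson score interval: better coverage for small n or p near 0 or 1
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    wilson = (center - half, center + half)
    return wald, wilson

wald, wilson = sensitivity_cis(tp=18, fn=2)
```

With these counts the Wald interval's upper limit exceeds 1, while the Wilson interval stays inside [0, 1], which is one symptom of why the normal approximation misbehaves for small samples.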
Your question is easy to answer if you are not too serious about $\theta_i\in[0,1]$. Is $\theta_i\in(0,1)$ good enough? Let's say it is. Then, instead of maximizing the likelihood function $L(\theta)$ in $\theta$, you are going to do a change of variables, and instead you maximize the likelihood function $L(\alpha)=L(\theta(\alpha))$ in $\alpha$.
What's $\theta(\alpha)$, you ask? Well, if $\theta$ is a $K$ dimensional vector, then we let $\alpha$ be a $(K-1)$ dimensional vector and set:
\begin{align}
\theta_1 &= \frac{\exp(\alpha_1)}{1+\sum_{k=1}^{K-1} \exp(\alpha_k)} \\
\theta_2 &= \frac{\exp(\alpha_2)}{1+\sum_{k=1}^{K-1} \exp(\alpha_k)} \\
&\vdots\\
\theta_{K-1} &= \frac{\exp(\alpha_{K-1})}{1+\sum_{k=1}^{K-1} \exp(\alpha_k)} \\
\theta_K &= \frac{1}{1+\sum_{k=1}^{K-1} \exp(\alpha_k)} \\
\end{align}
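This change of variables is the multinomial-logit (softmax) parameterization, and it is easy to check numerically that it lands in the open simplex. A minimal sketch (function name and test values are my own):

```python
import math

def theta_from_alpha(alpha):
    """Map an unconstrained (K-1)-vector alpha to theta in the open simplex,
    per the formulas above: theta_k = exp(alpha_k) / (1 + sum exp(alpha)),
    with theta_K = 1 / (1 + sum exp(alpha))."""
    denom = 1 + sum(math.exp(a) for a in alpha)
    theta = [math.exp(a) / denom for a in alpha]
    theta.append(1 / denom)   # the K-th component
    return theta

theta = theta_from_alpha([0.3, -1.2])
# every theta_i lies strictly between 0 and 1, and the components sum to 1
```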
After you substitute $\alpha$ into your likelihood function, you can maximize it unconstrained. The components of $\alpha$ can be any real numbers. The $\theta(\alpha)$ function magically imposes all your constraints on $\theta$. So, now the usual theorems proving consistency and asymptotic normality of the MLE follow.
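As a sanity check of the whole recipe, here is a sketch of an unconstrained maximization in $\alpha$ for a multinomial likelihood, where the closed-form MLE (the sample proportions) is known. The counts are toy data of my own choosing:

```python
import numpy as np
from scipy.optimize import minimize

counts = np.array([5.0, 3.0, 2.0])   # toy multinomial counts (illustrative)

def theta_of(alpha):
    # Append 0 for the K-th category so exp(0) = 1 appears in the denominator,
    # matching the reparameterization above
    e = np.exp(np.append(alpha, 0.0))
    return e / e.sum()

def negloglik(alpha):
    # Negative multinomial log-likelihood (up to a constant)
    return -np.sum(counts * np.log(theta_of(alpha)))

res = minimize(negloglik, x0=np.zeros(2), method="BFGS")
theta_hat = theta_of(res.x)
# theta_hat should match counts / counts.sum() = [0.5, 0.3, 0.2]
```

The optimizer roams over all of $\mathbb{R}^{K-1}$, yet the recovered $\hat{\theta}$ automatically satisfies the simplex constraints.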
What about $\theta$, though? Well, after you have estimated the $\alpha$, you just substitute them into the formulas above to get your estimator for $\theta$. What is the distribution of $\hat{\theta}$? It is asymptotically normal with mean $\theta_0$, the true value of $\theta$, and variance given by the delta method, $V(\hat{\theta})=\frac{\partial \theta}{\partial \alpha}\, V(\hat{\alpha})\, \frac{\partial \theta}{\partial \alpha}'$, where $\frac{\partial \theta}{\partial \alpha}$ is the $K\times(K-1)$ Jacobian evaluated at $\hat{\alpha}$.
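The delta-method calculation can be sketched as follows. The Jacobian of the softmax map has the closed form $\partial\theta_i/\partial\alpha_j = \theta_i(\mathbb{1}[i=j]-\theta_j)$; the numbers for $\hat{\alpha}$ and $V(\hat{\alpha})$ below are invented purely to show the mechanics:

```python
import numpy as np

def softmax_theta(alpha):
    e = np.exp(np.append(alpha, 0.0))   # K-th alpha fixed at 0
    return e / e.sum()

def jacobian(alpha):
    """d theta / d alpha, a K x (K-1) matrix:
    d theta_i / d alpha_j = theta_i * (1[i == j] - theta_j)."""
    theta = softmax_theta(alpha)
    K = theta.size
    J = np.zeros((K, K - 1))
    for i in range(K):
        for j in range(K - 1):
            J[i, j] = theta[i] * ((i == j) - theta[j])
    return J

alpha_hat = np.array([0.4, -0.1])                    # pretend estimates
V_alpha = np.array([[0.05, 0.01], [0.01, 0.04]])     # pretend V(alpha_hat)
J = jacobian(alpha_hat)
V_theta = J @ V_alpha @ J.T                          # delta-method variance
# the columns of J sum to zero, so the rows/columns of V_theta sum to zero:
# V_theta is singular, exactly as the next paragraph explains
```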
As you say, $V(\hat{\theta})$ won't be full rank. Obviously, it can't be full rank. Why not? Because we know the variance of $\sum \hat{\theta}_i$ has to be zero---this sum is always 1, so its variance must be zero. A non-invertible variance matrix is not a problem, however, unless you are using it for some purpose it can't be used for (say to test the null hypothesis that $\sum \theta_i = 1$). If you are trying to do that, then the error message telling you that you can't divide by zero is an excellent warning that you are doing something silly.
What if you are serious about including the endpoints of your interval? Well, that's much harder. What I would suggest is that you think about whether you are really serious. For example, if the $\theta_i$ are probabilities (and that's what your constraints make me think they are), then you really should not be expecting the usual maximum likelihood procedures to give you correct standard errors.
For example, if $\theta_1$ is the probability of heads and $\theta_2$ is the probability of tails, and your dataset looks like ten heads in a row, then the maximum likelihood estimate is $\hat{\theta}_1=1$ and $\hat{\theta}_2=0$. What's the variance of the maximum likelihood estimator evaluated at this estimate? Zero.
If you want to test the null hypothesis that $\theta_1=0.5$, what do you do? You sure don't do this: "Reject null if $\left|\frac{\hat{\theta}_1-0.5}{\sqrt{\hat{V}(\hat{\theta}_1)}}\right|>1.96$." Instead, you calculate the probability that you get ten heads in a row with a fair coin. If that probability is lower than whatever significance level you picked, then you reject.
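The exact calculation described above takes two lines (doubling for a two-sided test is my own convention; the one-sided probability is what the text mentions):

```python
# Exact test of H0: theta_1 = 0.5 after observing ten heads in a row
p_ten_heads = 0.5 ** 10       # = 1/1024, about 0.00098
p_value = 2 * p_ten_heads     # two-sided: ten tails would be equally extreme
# p_value is about 0.002 < 0.05, so reject H0 -- no zero variance in sight
```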
Best Answer
You have three parameters in your problem, so $\theta = (\beta_0, \beta_1, \sigma^2)$. $I(\theta)$ is a matrix and you cannot "divide by" $I(\theta)$, as in the formula in your second paragraph.
What you need instead is to take the inverse of $I(\theta)$. If you are looking for the confidence interval, e.g. of $\beta_1$, you would take the element $[2,2]$ of $I(\theta)^{-1}$ and plug it in place of $1/I(\theta)$ in your formula. You would then have
$$CI_{\beta_1} = \hat\beta_1 \pm 1.96\sqrt{I(\theta)^{22}}$$
where $I(\theta)^{22}$ denotes the element $[2,2]$ of $I(\theta)^{-1}$.
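Numerically, the distinction between $1/I(\theta)_{22}$ and $\left(I(\theta)^{-1}\right)_{22}$ matters whenever the parameters are correlated. A sketch with a made-up $3\times 3$ information matrix (the numbers are invented; only the mechanics matter):

```python
import numpy as np

# Illustrative observed information for theta = (beta0, beta1, sigma^2)
I = np.array([[50.0, 10.0,  0.0],
              [10.0,  4.0,  0.0],
              [ 0.0,  0.0, 25.0]])

I_inv = np.linalg.inv(I)          # invert the whole matrix, not elementwise
beta1_hat = 2.3                   # pretend MLE of beta1
se_beta1 = np.sqrt(I_inv[1, 1])   # element [2,2] in 1-based notation
ci = (beta1_hat - 1.96 * se_beta1, beta1_hat + 1.96 * se_beta1)
# here I_inv[1, 1] = 0.5, whereas 1 / I[1, 1] = 0.25: using the
# reciprocal of a single element would understate the standard error
```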