After some investigation, I think I found a small (but crucial!) imprecision in your post.
The first formula you wrote, $\operatorname{var}(\varepsilon) = \sigma^2 (I - H)$, is actually not totally exact. The formula should be $\operatorname{var}(\hat \varepsilon) = \sigma ^2 (I - H)$, where $\hat\varepsilon = Y - X\hat\beta$ and $\hat\beta = (X^TX)^{-1}X^TY$ is the OLS estimator. Thus $\hat\sigma^2(I - H)$ is an estimator of the variance of the estimated residuals associated with the OLS estimator. This formula does not require full independence of the $\varepsilon_i$, only that they are uncorrelated and all have the same variance $\sigma^2$. But this is not what you want! You want an estimate of the variance of the true residuals, not of the estimated residuals under OLS estimation.
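To make the distinction concrete, here is a minimal numpy sketch (simulated data with hypothetical dimensions) checking that the OLS residuals $\hat\varepsilon = (I-H)Y$ have covariance $\sigma^2(I-H)$ when the true errors are uncorrelated with common variance $\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 50, 3, 2.0                       # hypothetical sizes and error sd
X = rng.normal(size=(n, p))
beta = np.array([1.0, -0.5, 2.0])

H = X @ np.linalg.inv(X.T @ X) @ X.T            # hat matrix
resid_cov_theory = sigma**2 * (np.eye(n) - H)   # sigma^2 (I - H)

# Monte Carlo estimate of Cov(e) over many replications of the error vector
reps = 20_000
E = rng.normal(scale=sigma, size=(reps, n))     # uncorrelated errors, one row per replication
Y = X @ beta + E                                # each row is one sample of y
resid = Y @ (np.eye(n) - H)                     # e = (I - H) y, row-wise (I - H is symmetric)
resid_cov_mc = np.cov(resid, rowvar=False)

print(np.max(np.abs(resid_cov_mc - resid_cov_theory)))  # small, up to Monte Carlo error
```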
The OLS estimator corresponds to the maximum likelihood estimator under the hypothesis that the residuals are i.i.d. and normal. The estimated residuals can thus be very poor estimates of the true residuals if these hypotheses are not met, and their covariance matrix can be very different from the covariance matrix of the true residuals.
The second formula you wrote does correspond to the covariance matrix of the $\varepsilon_i$ under the hypothesis that they follow an AR(1) process.
Estimating the covariance matrix of the residuals of a linear regression without any assumption cannot easily be done: you would have more unknowns than data points... So you need to specify some form for the covariance matrix of the residuals. Supposing that they follow an AR(1) process (if this is relevant) is one way of doing so. You can also assume that they have a stationary, parametrized autocorrelation function, estimate its parameters, and use them to deduce the covariance matrix.
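If it helps to make this concrete, here is a minimal numpy sketch (with hypothetical values of $\sigma$, $\rho$ and $n$) of the covariance matrix implied by the common AR(1) parametrization $\operatorname{Cov}(\varepsilon_i, \varepsilon_j) = \sigma^2 \rho^{|i-j|}$:

```python
import numpy as np

def ar1_cov(n, sigma, rho):
    """Return the n x n covariance matrix of a stationary AR(1) error process,
    Cov(eps_i, eps_j) = sigma^2 * rho^|i-j|."""
    idx = np.arange(n)
    return sigma**2 * rho ** np.abs(idx[:, None] - idx[None, :])

Sigma = ar1_cov(n=5, sigma=2.0, rho=0.6)
print(Sigma)
# In practice sigma and rho would themselves be estimated (e.g. from the OLS
# residuals) and Sigma then plugged into GLS / feasible GLS.
```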
You might find it instructive to start with a basic idea: the variance of any random variable cannot be negative. (This is clear, since the variance is the expectation of the square of something and squares cannot be negative.)
Any $2\times 2$ covariance matrix $\mathbb A$ explicitly presents the variances and covariances of a pair of random variables $(X,Y),$ but it also tells you how to find the variance of any linear combination of those variables. This is because whenever $a$ and $b$ are numbers,
$$\operatorname{Var}(aX+bY) = a^2\operatorname{Var}(X) + b^2\operatorname{Var}(Y) + 2ab\operatorname{Cov}(X,Y) = \pmatrix{a&b}\mathbb A\pmatrix{a\\b}.$$
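As a quick sanity check, here is a small numpy simulation (with a hypothetical valid covariance matrix and arbitrary $a$, $b$) of this identity:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[121.0, 40.0],    # any valid 2x2 covariance matrix will do
              [40.0,  81.0]])
a, b = 0.7, -1.3

samples = rng.multivariate_normal(mean=[0, 0], cov=A, size=200_000)
lin_comb = a * samples[:, 0] + b * samples[:, 1]

print(lin_comb.var())                            # simulated Var(aX + bY)
print(np.array([a, b]) @ A @ np.array([a, b]))   # quadratic form (a b) A (a b)^T
```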
Applying this to your problem we may compute
$$\begin{aligned}
0 \le \operatorname{Var}(aX+bY) &= \pmatrix{a&b}\pmatrix{121&c\\c&81}\pmatrix{a\\b}\\
&= 121 a^2 + 81 b^2 + 2c\, ab\\
&=(11a)^2+(9b)^2+\frac{2c}{(11)(9)}(11a)(9b)\\
&= \alpha^2 + \beta^2 + \frac{2c}{(11)(9)} \alpha\beta.
\end{aligned}$$
The last few steps in which $\alpha=11a$ and $\beta=9b$ were introduced weren't necessary, but they help to simplify the algebra. In particular, what we need to do next (in order to find bounds for $c$) is complete the square: this is the process emulating the derivation of the quadratic formula to which everyone is introduced in grade school. Writing
$$C = \frac{c}{(11)(9)},\tag{*}$$
we find
$$\alpha^2 + \beta^2 + \frac{2c}{(11)(9)} \alpha\beta = \alpha^2 + 2C\alpha\beta + \beta^2 = (\alpha+C\beta)^2+(1-C^2)\beta^2.$$
Because $(\alpha+C\beta)^2$ and $\beta^2$ are both squares, they are not negative. Therefore if $1-C^2$ also is non-negative, the entire right side is not negative and can be a valid variance. Conversely, if $1-C^2$ is negative, you could set $\alpha=-C\beta$ to obtain the value $(1-C^2)\beta^2\lt 0$ on the right hand side, which is invalid.
You therefore deduce (from these perfectly elementary algebraic considerations) that
If $\mathbb A$ is a valid covariance matrix, then $1-C^2$ cannot be negative.
Equivalently, $|C|\le 1,$ which by $(*)$ means $-(11)(9) \le c \le (11)(9).$
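A quick numerical check of this bound (a numpy sketch testing positive semi-definiteness via eigenvalues for a few hypothetical values of $c$):

```python
import numpy as np

def is_psd(c, tol=1e-9):
    """Check whether [[121, c], [c, 81]] is positive semi-definite,
    with a tiny tolerance for the boundary case |c| = 99."""
    A = np.array([[121.0, c], [c, 81.0]])
    return bool(np.all(np.linalg.eigvalsh(A) >= -tol))

for c in [-120, -99, 0, 50, 99, 100]:
    print(c, is_psd(c))   # True exactly when |c| <= 99 = (11)(9)
```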
There remains the question whether any such $c$ does correspond to an actual variance matrix. One way to show this is true is to find random variables $(X,Y)$ with $\mathbb A$ as their covariance matrix. Here is one way (out of many).
I take it as given that you can construct independent random variables $A$ and $B$ having unit variances: that is, $\operatorname{Var}(A)=\operatorname{Var}(B) = 1.$ (For example, let $(A,B)$ take on the four values $(\pm 1, \pm 1)$ with equal probabilities of $1/4$ each.)
The independence implies $\operatorname{Cov}(A,B)=0.$ Given a number $c$ in the range $-(11)(9)$ to $(11)(9),$ define random variables
$$X = \sqrt{11^2-c^2/9^2}A + (c/9)B,\quad Y = 9B$$
(which is possible because $11^2 - c^2/9^2\ge 0$) and compute that the covariance matrix of $(X,Y)$ is precisely $\mathbb A.$
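Here is a short simulation sketch (for one hypothetical admissible value of $c$) confirming that this construction reproduces $\mathbb A$:

```python
import numpy as np

rng = np.random.default_rng(2)
c = 60.0                                    # any value with |c| <= 99
A = rng.choice([-1.0, 1.0], size=500_000)   # independent, unit-variance variables
B = rng.choice([-1.0, 1.0], size=500_000)

X = np.sqrt(11**2 - c**2 / 9**2) * A + (c / 9) * B
Y = 9 * B

print(np.cov(X, Y))   # approximately [[121, c], [c, 81]]
```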
Finally, if you carry out the same analysis for any symmetric matrix $$\mathbb A = \pmatrix{a & b \\ b & d},$$ you will conclude three things:
$a \ge 0.$
$d \ge 0.$
$ad - b^2 \ge 0.$
These conditions characterize symmetric, positive semi-definite matrices. Any $2\times 2$ matrix satisfying these conditions indeed is a variance matrix. (Emulate the preceding construction.)
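If you want to check this characterization numerically, here is a small numpy sketch (with an arbitrary sampling scheme for $a$, $b$, $d$) comparing the three scalar conditions to a direct eigenvalue test:

```python
import numpy as np

rng = np.random.default_rng(3)

def conditions_hold(a, b, d):
    """The three conditions: a >= 0, d >= 0, ad - b^2 >= 0."""
    return bool(a >= 0 and d >= 0 and a * d - b**2 >= 0)

def eig_psd(a, b, d, tol=1e-12):
    """Direct check: all eigenvalues of the symmetric matrix are non-negative."""
    return bool(np.all(np.linalg.eigvalsh(np.array([[a, b], [b, d]])) >= -tol))

agree = all(
    conditions_hold(a, b, d) == eig_psd(a, b, d)
    for a, b, d in rng.uniform(-5, 5, size=(10_000, 3))
)
print(agree)   # True: the two characterizations coincide
```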
Best Answer
Your assumption at the beginning is $$ Cov[\epsilon_i,\epsilon_j] = \sigma^2\rho $$ for variables that correspond to observations from the same state.
Now when you calculate the variance of each of the three components at the end, you have to take that into account, i.e. $$ V[\overline{\epsilon}_A] = V\left[\frac{1}{n_A}\sum_{i=1}^2\epsilon_{iA}\right] $$ $$ = \frac{1}{n_A^2}\sum_{i=1}^2\sum_{j=1}^2 Cov(\epsilon_{iA},\epsilon_{jA}) $$ $$ = \frac{1}{n_A^2}\left(V[\epsilon_{1A}]+V[\epsilon_{2A}]+2\cdot Cov[\epsilon_{1A},\epsilon_{2A}]\right) $$ $$ = \frac{1}{2^2}(\sigma^2+\sigma^2+2\cdot\sigma^2\rho) $$ $$ = \frac{1}{2}\sigma^2(1+\rho). $$ This is the formula I used. Now that I have shown you how the first element of the covariance matrix is calculated, you should be able to figure out how to calculate the other diagonal elements from here.
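If it helps, here is a tiny numpy sketch (with hypothetical values of $\sigma$ and $\rho$) reproducing this calculation as the quadratic form $\frac{1}{n_A^2}\mathbf{1}^T S \mathbf{1}$, where $S$ is the $2\times 2$ covariance matrix of the two within-state errors:

```python
import numpy as np

sigma, rho, n = 1.5, 0.3, 2                   # hypothetical values, n_A = 2
S = sigma**2 * np.array([[1.0, rho],
                         [rho, 1.0]])         # Cov of (eps_1A, eps_2A)
w = np.ones(n) / n                            # averaging weights 1/n_A

print(w @ S @ w)                              # variance of the within-state mean error
print(0.5 * sigma**2 * (1 + rho))             # closed form (1/2) sigma^2 (1 + rho)
```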