Here is a geometric interpretation.
First, take two vectors in $\mathbb{R}^2$
$$\vec{\mathbf{z}}=[x,y] \,, \vec{\mathbf{w}}=[u,v]$$
For these vectors, there are two standard types of "products", the dot product
$$\vec{\mathbf{z}}\cdot\vec{\mathbf{w}}=xu+yv$$
and the cross product*
$$\vec{\mathbf{z}}\times\vec{\mathbf{w}}=xv-yu$$
which can be interpreted as
$$\vec{\mathbf{z}}\cdot\vec{\mathbf{w}}_\perp$$
where $\vec{\mathbf{w}}_\perp=[v,-u]$ has the same magnitude as $\vec{\mathbf{w}}$ but is orthogonal to it.
(*Technically this "2D cross product" is defined as $[0,0,\vec{\mathbf{z}}\times\vec{\mathbf{w}}]\equiv[x,y,0]\times[u,v,0]$.)
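If you want to sanity-check these identities numerically, here is a minimal NumPy sketch (the specific vectors are my own arbitrary choices, not part of the argument):

```python
import numpy as np

# Arbitrary example vectors z = [x, y], w = [u, v]
x, y = 2.0, 3.0
u, v = -1.0, 4.0

dot = x * u + y * v    # dot product: 10.0
cross = x * v - y * u  # 2D "cross product": 11.0

# The cross product is the dot product with the rotated vector w_perp = [v, -u]:
assert np.isclose(cross, np.dot([x, y], [v, -u]))

# ...and it is the z-component of the embedded 3D cross product:
assert np.isclose(cross, np.cross([x, y, 0.0], [u, v, 0.0])[2])
```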
In terms of geometric intuition, the dot product between two vectors measures how well they align (think correlation), but also their relative magnitudes (think standard deviations), i.e.
$$\vec{\mathbf{z}}\cdot\vec{\mathbf{w}}=||\vec{\mathbf{z}}||\,||\vec{\mathbf{w}}||\,\cos[\theta]$$
where $\theta$ is the angle between them (compare to $\sigma_{xy}=\sigma_x\sigma_y\rho_{xy}$).
Note that the dot product can also be written as $\mathbf{z}^T\mathbf{w}$, where $\mathbf{z}$ and $\mathbf{w}$ are just $\vec{\mathbf{z}}$ and $\vec{\mathbf{w}}$ written as column vectors.
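As a quick check of the angle formula (again just an illustrative sketch, computing $\theta$ independently of the dot product):

```python
import numpy as np

z = np.array([2.0, 3.0])
w = np.array([-1.0, 4.0])

# Angle between the vectors, from their individual polar angles:
theta = np.arctan2(w[1], w[0]) - np.arctan2(z[1], z[0])

assert np.isclose(z @ w, np.linalg.norm(z) * np.linalg.norm(w) * np.cos(theta))
```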
Now let us do the same thing with two scalars in the complex plane ($z,w\in\mathbb{C}$), i.e.
$$z=x+iy \,, w=u+iv$$
What is the equivalent to the "dot product" here? It is actually the same as above, but now using the conjugate transpose, i.e. $z^*\equiv\bar{z}^T$ (also written as $z^\dagger$).
Since the transpose of a scalar is just that same scalar, the complex dot product is then
$$z^{\dagger}w=\bar{z}w=(x-iy)(u+iv)=(xu+yv)+i(xv-yu)$$
We can immediately notice two things. First, the complex dot product is equivalent to
$$z^{\dagger}w=(\vec{\mathbf{z}}\cdot\vec{\mathbf{w}})+i(\vec{\mathbf{z}}\cdot\vec{\mathbf{w}}_\perp)$$
i.e. it is a complex number whose real component is the dot product of the corresponding 2-vectors, and whose imaginary component is their cross product. Second, since $\bar{x}=x$ for $x\in\mathbb{R}$, the vector dot product we started with can be written as $\vec{\mathbf{z}}\cdot\vec{\mathbf{w}}=\mathbf{z}^{\dagger}\mathbf{w}$ (i.e. we were really using the conjugate transpose all along).
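Numerically, the decomposition looks like this (a small sketch; the numbers are arbitrary):

```python
import numpy as np

x, y = 2.0, 3.0
u, v = -1.0, 4.0
z, w = complex(x, y), complex(u, v)

zw = np.conj(z) * w  # the complex "dot product" conj(z) * w

assert np.isclose(zw.real, x * u + y * v)  # real part = vector dot product
assert np.isclose(zw.imag, x * v - y * u)  # imaginary part = cross product
```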
Now for the covariance matrix.
For simplicity, let us assume that all random variables have zero mean. Then the covariance is defined as
$$\sigma_{z,w}\equiv\mathrm{Cov}[z,w]\equiv\mathbb{E}[\bar{z}w]$$
so we have
\begin{align}
\mathrm{Cov}[z,w] &= \mathrm{Re}[\sigma_{z,w}]+i\,\mathrm{Im}[\sigma_{z,w}] \\
&= \,\mathbb{E}[\,\vec{\mathbf{z}}\cdot\vec{\mathbf{w}}\,]+i\,\mathbb{E}[\,\vec{\mathbf{z}}\cdot\vec{\mathbf{w}}_\perp\,]
\end{align}
The real (imaginary) component of $\sigma_{z,w}$ is the expected value of the dot (cross) product of the associated vectors $\vec{\mathbf{z}}$ and $\vec{\mathbf{w}}$.
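Here is a simulation sketch of this claim; the particular correlated complex variables below are my own construction, chosen only so that both components of the covariance are nonzero:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 500_000

# Zero-mean correlated complex samples (arbitrary mixing of real Gaussians):
g = rng.standard_normal((m, 3))
z = g[:, 0] + 1j * g[:, 1]
w = (g[:, 0] + g[:, 2]) + 1j * (0.5 * g[:, 0] + g[:, 1])

cov_zw = np.mean(np.conj(z) * w)  # sample version of E[conj(z) w]

# Expected dot / cross products of the associated 2-vectors:
e_dot = np.mean(z.real * w.real + z.imag * w.imag)
e_cross = np.mean(z.real * w.imag - z.imag * w.real)

assert np.isclose(cov_zw.real, e_dot)
assert np.isclose(cov_zw.imag, e_cross)
```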
This is the main intuition. The rest of this answer is just for completeness.
If our random variable is a column vector $\mathbf{z}=[z_1,\ldots,z_n]\in\mathbb{C}^n$ with covariance matrix $\boldsymbol{\Sigma}\in\mathbb{C}^{n\times n}$, then we have
$$\Sigma_{ij}=\mathrm{Cov}[z_i,z_j]$$
Finally, if we have $m$ samples of the random variable $\mathbf{z}$, arranged as the rows of a data matrix $\boldsymbol{Z}\in\mathbb{C}^{m\times n}$, then the covariance can be approximated by the sample* covariance
$$\boldsymbol{\Sigma}\approx\tfrac{1}{m}\boldsymbol{Z}^{\dagger}\boldsymbol{Z}$$
(*yes, I divided by $m$, so you can call it the "biased" sample covariance if you must.)
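In NumPy this estimator is a one-liner; here is a sketch with synthetic data (shapes, seed, and indices are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 10_000, 3
Z = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
Z = Z - Z.mean(axis=0)  # center, matching the zero-mean assumption above

Sigma = Z.conj().T @ Z / m  # the (biased) sample covariance

# Entry (i, j) is the sample mean of conj(z_i) * z_j:
assert np.isclose(Sigma[0, 2], np.mean(np.conj(Z[:, 0]) * Z[:, 2]))
# Sigma is Hermitian, with real non-negative variances on the diagonal:
assert np.allclose(Sigma, Sigma.conj().T)
assert np.all(Sigma.diagonal().real >= 0)
```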
You might find it instructive to start with a basic idea: the variance of any random variable cannot be negative. (This is clear, since the variance is the expectation of the square of something and squares cannot be negative.)
Any $2\times 2$ covariance matrix $\mathbb A$ explicitly presents the variances and covariances of a pair of random variables $(X,Y),$ but it also tells you how to find the variance of any linear combination of those variables. This is because whenever $a$ and $b$ are numbers,
$$\operatorname{Var}(aX+bY) = a^2\operatorname{Var}(X) + b^2\operatorname{Var}(Y) + 2ab\operatorname{Cov}(X,Y) = \pmatrix{a&b}\mathbb A\pmatrix{a\\b}.$$
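Before applying this to the specific matrix, here is a quick simulation check of the identity (the pair $(X,Y)$ below is an arbitrary correlated example of my own):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal(100_000)
Y = 0.5 * X + rng.standard_normal(100_000)
a, b = 3.0, -2.0

A = np.cov(X, Y)  # 2x2 sample covariance matrix of (X, Y)
lhs = np.var(a * X + b * Y, ddof=1)  # ddof=1 to match np.cov's default
rhs = np.array([a, b]) @ A @ np.array([a, b])
assert np.isclose(lhs, rhs)
```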
Applying this to your problem we may compute
$$\begin{aligned}
0 \le \operatorname{Var}(aX+bY) &= \pmatrix{a&b}\pmatrix{121&c\\c&81}\pmatrix{a\\b}\\
&= 121 a^2 + 81 b^2 + 2c\,ab\\
&=(11a)^2+(9b)^2+\frac{2c}{(11)(9)}(11a)(9b)\\
&= \alpha^2 + \beta^2 + \frac{2c}{(11)(9)} \alpha\beta.
\end{aligned}$$
The last few steps, in which $\alpha=11a$ and $\beta=9b$ were introduced, weren't necessary, but they help to simplify the algebra. In particular, what we need to do next (in order to find bounds for $c$) is complete the square: this is the same process everyone is introduced to in grade school when deriving the quadratic formula. Writing
$$C = \frac{c}{(11)(9)},\tag{*}$$
we find
$$\alpha^2 + \beta^2 + \frac{2c}{(11)(9)} \alpha\beta = \alpha^2 + 2C\alpha\beta + \beta^2 = (\alpha+C\beta)^2+(1-C^2)\beta^2.$$
Because $(\alpha+C\beta)^2$ and $\beta^2$ are both squares, they are not negative. Therefore if $1-C^2$ also is non-negative, the entire right side is not negative and can be a valid variance. Conversely, if $1-C^2$ is negative, you could set $\alpha=-C\beta$ to obtain the value $(1-C^2)\beta^2\lt 0$ on the right hand side, which is invalid.
You therefore deduce (from these perfectly elementary algebraic considerations) that
If $\mathbb A$ is a valid covariance matrix, then $1-C^2$ cannot be negative.
Equivalently, $|C|\le 1,$ which by $(*)$ means $-(11)(9) \le c \le (11)(9).$
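You can confirm this bound numerically, for instance by checking eigenvalues with NumPy (positive semi-definiteness of a symmetric matrix is equivalent to all eigenvalues being non-negative); the test values are my own:

```python
import numpy as np

def is_psd(c):
    A = np.array([[121.0, c], [c, 81.0]])
    return bool(np.all(np.linalg.eigvalsh(A) >= -1e-9))  # tolerance for round-off

assert is_psd(99.0) and is_psd(-99.0) and is_psd(0.0)  # inside / on the bound
assert not is_psd(99.5) and not is_psd(-100.0)         # just outside the bound
```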
There remains the question whether any such $c$ does correspond to an actual variance matrix. One way to show this is true is to find a random variable $(X,Y)$ with $\mathbb A$ as its covariance matrix. Here is one way (out of many).
I take it as given that you can construct independent random variables $A$ and $B$ having unit variances: that is, $\operatorname{Var}(A)=\operatorname{Var}(B) = 1.$ (For example, let $(A,B)$ take on the four values $(\pm 1, \pm 1)$ with equal probabilities of $1/4$ each.)
The independence implies $\operatorname{Cov}(A,B)=0.$ Given a number $c$ in the range $-(11)(9)$ to $(11)(9),$ define random variables
$$X = \sqrt{11^2-c^2/9^2}A + (c/9)B,\quad Y = 9B$$
(which is possible because $11^2 - c^2/9^2\ge 0$) and compute that the covariance matrix of $(X,Y)$ is precisely $\mathbb A.$
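Here is a simulation sketch of this construction, taking $A$ and $B$ to be independent random signs as suggested above ($c=60$ is an arbitrary choice in the allowed range):

```python
import numpy as np

rng = np.random.default_rng(3)
c, m = 60.0, 1_000_000

A = rng.choice([-1.0, 1.0], size=m)  # independent, zero mean, unit variance
B = rng.choice([-1.0, 1.0], size=m)

X = np.sqrt(11**2 - c**2 / 9**2) * A + (c / 9) * B
Y = 9 * B

print(np.cov(X, Y))  # approximately [[121, 60], [60, 81]]
```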
Finally, if you carry out the same analysis for any symmetric matrix $$\mathbb A = \pmatrix{a & b \\ b & d},$$ you will conclude three things:
- $a \ge 0.$
- $d \ge 0.$
- $ad - b^2 \ge 0.$
These conditions characterize symmetric, positive semi-definite matrices. Any $2\times 2$ matrix satisfying these conditions indeed is a variance matrix. (Emulate the preceding construction.)
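A brute-force check of this characterization against NumPy's eigenvalue test (random matrices of my own choosing, skipping near-degenerate cases where round-off could flip the comparison):

```python
import numpy as np

rng = np.random.default_rng(4)
for _ in range(10_000):
    a, b, d = rng.uniform(-2.0, 2.0, size=3)
    if abs(a * d - b * b) < 1e-6 or abs(a) < 1e-6 or abs(d) < 1e-6:
        continue  # boundary cases are ambiguous under floating point
    M = np.array([[a, b], [b, d]])
    three_conditions = a >= 0 and d >= 0 and a * d - b * b >= 0
    psd = bool(np.all(np.linalg.eigvalsh(M) >= -1e-9))
    assert three_conditions == psd
```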
No. The variances of the estimates of the intercept and slope don't involve their covariance at all.
You would use the covariances when dealing with some linear combination of the parameter estimates.
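For instance, the variance of a fitted value $\hat y_0 = \hat\beta_0 + \hat\beta_1 x_0$ is exactly such a linear combination, and it does use the covariance term. A hedged sketch (the toy design, noise level, and $x_0$ are my own choices):

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0.0, 10.0, 50)
y = 1.0 + 2.0 * x + rng.standard_normal(x.size)

# OLS via least squares; Cov(beta_hat) = sigma^2 (X'X)^{-1}
X = np.column_stack([np.ones_like(x), x])
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = res[0] / (x.size - 2)          # residual variance estimate
cov_beta = sigma2 * np.linalg.inv(X.T @ X)

# Variance of the fitted value at x0 = 4 uses the full quadratic form,
# including the off-diagonal covariance of (intercept, slope):
v = np.array([1.0, 4.0])
var_fit = v @ cov_beta @ v
print(var_fit)
```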