Straightforward method
If you'd covered a bit more material, you'd already be there. Start by assuming the null:
$$
h=c_0\beta_0+c_1\beta_1
$$
then, as you've pointed out, $c_0\hat\beta_0+c_1\hat\beta_1-h\sim\mathcal{N}(0,V)$, where $V$ is some unknown variance. So you need to find $V$. The equation you've started with is also correct:
$$
V=\text{Var}[c_0\hat\beta_0+c_1\hat\beta_1-h] = c_0^2\text{Var}[\hat\beta_0]+c_1^2\text{Var}[\hat\beta_1] +2c_0c_1\text{Cov}(\hat\beta_0,\hat\beta_1)
$$
To complete this straightforward method, one thing you could use is the variance-covariance matrix of the estimated coefficients, which I'm guessing you haven't covered yet (if you have, this should be straightforward: the covariance you need is just one of its entries). Alternatively, you can derive $\text{Cov}(\hat\beta_0,\hat\beta_1)$ yourself. To do so, note that:
$$
\hat\beta_1 = \dfrac{\sum_{i=1}^N (x_i-\bar x)\,y_i}{\sum_{i=1}^N (x_i-\bar x)^2}
\quad \text{and} \quad
\hat\beta_0 = \frac{1}{N}\sum_{i=1}^N y_i -\hat\beta_1 \bar x
$$
and figure out where you can go from there given the assumptions you've been handed. Unfortunately, you can't simply assume that $\text{Cov}(\bar y, \hat\beta_1)=0$: both are estimators built from the same $y_i$, so their covariance is not automatically zero and needs to be justified. If you can justify why it should be 0, then you're done. One way to approach this would be to think about what would happen if you ran this regression on demeaned data, and go from there to a formal argument.
In the future you'll also learn that it's much easier to test this kind of restriction with a Wald test or an $F$ test. This method, once you've used the variance-covariance matrix, is algebraically equivalent to a Wald test and asymptotically equivalent to an $F$ test.
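If you want to see the whole pipeline in one place, here is a minimal numpy sketch of the straightforward method. The data are simulated and the values of $c_0$, $c_1$, $h$ are made up for illustration; the point is only that $V$ is built from the estimated variance-covariance matrix of the coefficients exactly as in the formula above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical restriction c0*beta0 + c1*beta1 = h and simulated data (all made up).
c0, c1, h = 1.0, 2.0, 3.0
n = 200
x = rng.normal(size=n)
y = 1.0 + 1.0 * x + rng.normal(size=n)    # true beta0 = beta1 = 1, so the null holds here

# OLS by hand: beta_hat = (X'X)^{-1} X'y
X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y

# Estimated error variance and the variance-covariance matrix of beta_hat
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - 2)
vcov = sigma2_hat * XtX_inv               # Cov(beta0_hat, beta1_hat) is the off-diagonal entry

# V = c0^2 Var(b0) + c1^2 Var(b1) + 2 c0 c1 Cov(b0, b1), written compactly as c' vcov c
c = np.array([c0, c1])
V = c @ vcov @ c
t_stat = (c @ beta_hat - h) / np.sqrt(V)  # compare with t_{n-2} critical values
print(beta_hat, t_stat)
```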
Tricky method may be easier
Right now your model is:
$$
y_i = \beta_0 + \beta_1 x_i + \epsilon_i
$$
Under the null, remember that $\beta_1- (h-c_0\beta_0)/c_1=0$. Try manipulating the regression equation, adding and subtracting equal quantities and redefining the variables, so that you can test this exact restriction as a $t$ test on a slightly different regression formed from the same data and equivalent to your original model. While you're doing this, remember that $h$, $c_0$ and $c_1$ are known constants, not random variables. Without giving you the answer, that's a trick that could make your life easier.
The method you will arrive at if you use this trick right is also algebraically equivalent to a Wald test.
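If you'd like to check such a reparametrization numerically once you've worked it out, here is one possible version as a numpy sketch (it does spell out one answer, so skip it if you'd rather derive it yourself). The data and the values of $c_0$, $c_1$, $h$ are again made up; the point is that the $t$ statistic from the transformed regression matches the direct one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical restriction and simulated data (assumptions, not from the original problem).
c0, c1, h = 1.0, 2.0, 3.0
n = 200
x = rng.normal(size=n)
y = 1.0 + 1.0 * x + rng.normal(size=n)

def t_stat(X, y, c, h):
    """t statistic for H0: c'beta = h in the regression of y on the columns of X."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    s2 = (y - X @ b) @ (y - X @ b) / (len(y) - X.shape[1])
    return (c @ b - h) / np.sqrt(s2 * c @ XtX_inv @ c)

# Direct method on the original regression y = b0 + b1*x + e
X = np.column_stack([np.ones(n), x])
t_direct = t_stat(X, y, np.array([c0, c1]), h)

# Reparametrized regression: with theta = c0*b0 + c1*b1 - h, substituting
# b1 = (theta + h - c0*b0)/c1 into the model gives
#   y - (h/c1)*x = b0*(1 - (c0/c1)*x) + theta*(x/c1) + e,
# so H0 becomes "the coefficient on x/c1 equals 0", an ordinary t test.
y_tilde = y - (h / c1) * x
Z = np.column_stack([1 - (c0 / c1) * x, x / c1])
t_reparam = t_stat(Z, y_tilde, np.array([0.0, 1.0]), 0.0)

print(t_direct, t_reparam)   # identical up to floating-point error
```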
$\newcommand{\one}{\mathbf 1}\newcommand{\e}{\varepsilon}$I would just go for a linear algebra approach since then we get joint normality easily. You have $y = X\beta + \e$ with $X = (\one \mid x)$ and $\e\sim\mathcal N(\mathbf 0, \sigma^2 I)$.
We know
$$
\hat\beta = (X^TX)^{-1}X^Ty \sim \mathcal N(\beta, \sigma^2 (X^TX)^{-1})
$$
where
$$
(X^TX)^{-1} = \begin{bmatrix} n & n \bar x \\ n \bar x & x^Tx\end{bmatrix}^{-1} = \frac{1}{x^Tx - n\bar x^2}\begin{bmatrix} x^Tx/n & - \bar x \\ - \bar x & 1\end{bmatrix}.
$$
By assumption $X$ is full rank, which in this case means $x$ is not constant (since the only way to be rank deficient is for $x$ to be in the span of $\one$). This means $\det X^TX \neq 0$, so the off-diagonal entry $-\bar x/(x^Tx - n\bar x^2)$ is well defined and $\text{Cov}(\hat\beta_0, \hat\beta_1) = 0$ if and only if $\bar x = 0$. Since we do indeed have bivariate normality, zero covariance is equivalent to independence.
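Here is a quick numerical sanity check of that closed-form inverse, on arbitrary made-up data (the sample size and the distribution of $x$ are assumptions for illustration only).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = rng.normal(loc=1.5, size=n)        # nonzero mean, so the covariance should be nonzero
X = np.column_stack([np.ones(n), x])   # X = (1 | x)

xbar = x.mean()
xtx = x @ x

# Closed form from above: (X'X)^{-1} = [[x'x/n, -xbar], [-xbar, 1]] / (x'x - n*xbar^2)
closed_form = np.array([[xtx / n, -xbar], [-xbar, 1.0]]) / (xtx - n * xbar**2)

print(np.allclose(closed_form, np.linalg.inv(X.T @ X)))  # True
# The off-diagonal entry is -xbar/(x'x - n*xbar^2): it vanishes exactly when xbar = 0.
```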
Here's a different approach that avoids using the normal equations. We know
$$
\hat\beta_0 = \bar y - \hat\beta_1 \bar x \\
\hat\beta_1 = \frac{\text{Cov}(x,y)}{\text{Var}(x)}
$$
and we want to show $\bar x = 0 \implies \hat\beta_0 \perp \hat\beta_1$, where I'm using "$\perp$" to denote independence.
Without losing any generality I'll assume $x^Tx = 1$ (this preserves $\bar x = 0$). Then under the assumption of $\bar x = 0$ we have
$$
\hat\beta_0 = \bar y = n^{-1}\one^Ty \\
\hat\beta_1 = x^Ty - \bar y x^T\one = x^Ty.
$$
This means
$$
{\hat\beta_0 \choose \hat\beta_1} = (n^{-1}\one \mid x)^Ty
$$
so this is a linear transformation of a Gaussian and is in turn Gaussian, and the covariance matrix is proportional to
$$
(n^{-1}\one \mid x)^T(n^{-1}\one \mid x) = \begin{bmatrix} n^{-1} & 0 \\ 0 & 1\end{bmatrix}
$$
which gives us independence.
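Here is a small simulation sketch of this; the fixed design, the true coefficients, and the noise level are all made up. With $\bar x = 0$ and $x^Tx = 1$ held fixed, the simulated $\hat\beta_0$ and $\hat\beta_1$ come out uncorrelated, and their variances match the $\operatorname{diag}(n^{-1}, 1)$ covariance above.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 25, 100_000

# Fixed design with xbar = 0 and x'x = 1 (the normalization used above).
x = rng.normal(size=n)
x -= x.mean()
x /= np.linalg.norm(x)

# Simulate y = beta0 + beta1*x + noise many times and record the estimates.
beta0, beta1, sigma = 2.0, -1.0, 1.0        # arbitrary made-up true values
eps = sigma * rng.normal(size=(reps, n))
y = beta0 + beta1 * x + eps                 # shape (reps, n)

b1_hat = y @ x                              # x'y, since x'x = 1 and xbar = 0
b0_hat = y.mean(axis=1)                     # ybar

print(np.cov(b0_hat, b1_hat)[0, 1])         # ~ 0: uncorrelated, hence independent (jointly Gaussian)
print(np.var(b0_hat) * n, np.var(b1_hat))   # both ~ sigma^2, matching the diag(1/n, 1) covariance above
```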
This result can be generalized by noting that $\bar x = 0$ is equivalent to having an orthogonal design matrix in this case.
Suppose now we have an $n\times p$ full column rank covariate matrix $X$ which is partitioned as $X = (Z\mid W)$ where $Z$ has orthonormal columns and $W$ is unconstrained.
If all of the columns are orthonormal, i.e. $X=Z$, the result is easy since $X^TX = I$, so
$$
\hat\beta \sim \mathcal N(\beta, \sigma^2I).
$$
I'll prove the following more interesting result: letting $\hat\beta_A$ denote the vector of coefficients for block $A$ of $X$, the elements of $\hat\beta_Z$ are conditionally independent given $\hat\beta_W$.
This can be shown by directly computing the covariance matrix of $\hat\beta_Z \mid \hat\beta_W$ and since $\hat\beta_Z\mid\hat\beta_W$ is still multivariate Gaussian, this gives us independence. I'll take $\sigma^2 = 1$ without losing any generality.
I'll start with the full covariance matrix of $\hat\beta$, which is proportional to $(X^TX)^{-1}$. $X^TX$ is a $2\times 2$ block matrix so we can invert it as
$$
(X^TX)^{-1} = \begin{bmatrix}I & Z^TW \\ W^TZ & W^TW\end{bmatrix}^{-1} = \begin{bmatrix}
I + Z^TWA^{-1}W^TZ & -Z^TWA^{-1} \\
-A^{-1}W^TZ & A^{-1}
\end{bmatrix}
$$
where $A = W^TW - W^TZZ^TW = W^T(I-ZZ^T)W$ is the cross-product (Gram) matrix of $W$ after projecting its columns onto the space orthogonal to the column space of $Z$ (note that $I - ZZ^T$ is exactly that projection, since $Z^TZ = I$).
It is not true in general that $I + Z^TWA^{-1}W^TZ = I$, so marginally we are not guaranteed independence in the $\hat\beta_Z$. But now if we condition $\hat\beta_Z$ on $\hat\beta_W$ we obtain
$$
\text{Var}(\hat\beta_Z \mid \hat\beta_W) = I + Z^TWA^{-1}W^TZ - Z^TWA^{-1} \cdot A \cdot A^{-1}W^TZ = I
$$
so we do indeed have conditional independence.
$\square$
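If you want to see the conditional-independence claim numerically, here is a sketch with a made-up design: $Z$ gets orthonormal columns via a QR decomposition, and $W$ is arbitrary (and deliberately correlated with $Z$). The marginal covariance block of $\hat\beta_Z$ is not the identity, but the Gaussian conditional covariance is.

```python
import numpy as np

rng = np.random.default_rng(4)
n, q, r = 100, 3, 2   # q columns in Z (orthonormal), r columns in W (unconstrained)

# Build Z with orthonormal columns via QR; W is arbitrary and correlated with Z.
Z, _ = np.linalg.qr(rng.normal(size=(n, q)))
W = rng.normal(size=(n, r)) + Z @ rng.normal(size=(q, r))
X = np.column_stack([Z, W])

Sigma = np.linalg.inv(X.T @ X)         # Cov(beta_hat) up to sigma^2
S_zz, S_zw, S_ww = Sigma[:q, :q], Sigma[:q, q:], Sigma[q:, q:]

# Marginally, Cov(beta_Z_hat) = I + Z'W A^{-1} W'Z, generally not the identity:
print(np.allclose(S_zz, np.eye(q)))    # typically False

# Conditioning on beta_W_hat via the Gaussian conditioning formula gives the identity:
cond = S_zz - S_zw @ np.linalg.inv(S_ww) @ S_zw.T
print(np.allclose(cond, np.eye(q)))    # True
```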
Best Answer
Because $(X,Y)$ has a uniform distribution over the triangle shown, the expectation of $Y$ conditional on $X$ evidently lies midway between the lower and upper boundaries of the triangle, shown as the dotted line $y = 1 - x/2$ in the diagram.
That's the regression of $Y$ on $X.$ Because it happens to be a linear function, it's also the (Ordinary Least Squares, or "OLS") linear regression.
We can prove this from first principles. The density function (supported in the blue triangle of the diagram) is
$$f_{X,Y}(x,y) = 2\mathcal I(0\le x\le 1,\ 1-x \le y\le 1).$$
We will need the first and second moments, as always, so let's calculate them now. By symmetry $X$ and $Y$ have the same expectation,
$$E[Y] = E[X] = \iint x f_{X,Y}(x,y)\,\mathrm d x\, \mathrm d y = 2 \int_0^1\int_{1-x}^1 x \,\mathrm d y\, \mathrm d x = \frac{2}{3}.$$
Similarly
$$E[Y^2] = E[X^2] = \frac{1}{2}$$
and
$$E[XY] = \frac{5}{12}.$$
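These moments are easy to spot-check by Monte Carlo. Here is a small sketch that samples uniformly from the triangle by rejection; the sample size is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 1_000_000

# Rejection sampling: uniform on the unit square, keep the points with y >= 1 - x.
x = rng.uniform(size=N)
y = rng.uniform(size=N)
keep = y >= 1 - x
x, y = x[keep], y[keep]

# Compare with E[X] = E[Y] = 2/3, E[X^2] = E[Y^2] = 1/2, E[XY] = 5/12.
print(x.mean(), y.mean())             # ~ 0.6667
print((x**2).mean(), (y**2).mean())   # ~ 0.5
print((x * y).mean())                 # ~ 0.4167
```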
The least squares objective is the average squared deviation between $Y$ and $\alpha + \beta X$ for unknown parameters to be determined:
$$\Lambda = \iint (y - (\alpha + \beta x))^2\,f_{X,Y}(x,y)\,\mathrm d x\,\mathrm dy.$$
This is a differentiable function of the parameters (which can be any real numbers) and the differentiation can be carried out under the integral sign, telling us that
$$\Lambda_\alpha = -2\iint (y - (\alpha + \beta x))\,f_{X,Y}(x,y)\,\mathrm dx\,\mathrm dy = -2\left(E[Y] - \alpha - \beta E[X]\right)$$
and
$$\Lambda_\beta = -2\iint x(y - (\alpha + \beta x))\,f_{X,Y}(x,y)\,\mathrm dx\,\mathrm dy = -2\left(E[XY] - \alpha E[X] - \beta E[X^2]\right).$$
Equating both with zero gives all possible critical points. Plugging in the expectations computed previously gives
$$0 = -2\left(\frac{2}{3} - \alpha - \beta\, \frac{2}{3}\right)$$
and
$$0 = -2\left(\frac{5}{12} - \alpha\,\frac{2}{3} - \beta\, \frac{1}{2}\right).$$
This system of linear equations (the Normal equations of OLS) has the unique solution (easily found)
$$(\alpha,\beta) = (1, -1/2),$$
as we saw in the diagram.
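For a quick check, here is the same $2\times 2$ system solved numerically, using the moments computed above.

```python
import numpy as np

# Normal equations from the moments above:
#   E[Y]  = alpha + beta * E[X]
#   E[XY] = alpha * E[X] + beta * E[X^2]
A = np.array([[1.0, 2/3],
              [2/3, 1/2]])
b = np.array([2/3, 5/12])
alpha, beta = np.linalg.solve(A, b)
print(alpha, beta)   # 1.0, -0.5, i.e. the dotted line y = 1 - x/2
```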