So given that your functions are positive semidefinite, there are a number of algorithms you can use (see citation 6 of https://en.wikipedia.org/wiki/Quadratic_programming#cite_note-6). But this problem is simple enough that we don't need such techniques:
Given $x \in \mathbb{R}^2$ we wish to solve
$$ \max x^T A x \\ x^T P_1 x \le k, x^T P_2 x \le k$$
This can be spelled out completely concretely by letting $A = \begin{pmatrix} A_{00} & A_{01} \\ A_{10} & A_{11} \end{pmatrix}$ and
$$P_1 = \begin{pmatrix} P_{001} & P_{011} \\ P_{101} & P_{111} \end{pmatrix}$$
$$P_2 = \begin{pmatrix} P_{002} & P_{012} \\ P_{102} & P_{112} \end{pmatrix}$$
Then it follows that we wish to solve
$$ \max x_0 (A_{00} x_0 + A_{01} x_1) + x_1(A_{10} x_0 + A_{11} x_1) \\ x_0 (P_{001} x_0 + P_{011} x_1) + x_1(P_{101} x_0 + P_{111} x_1) -k \le 0 \\ x_0 (P_{002} x_0 + P_{012} x_1) + x_1(P_{102} x_0 + P_{112} x_1) -k \le 0 $$
We rearrange terms here to yield:
$$ \max A_{00} x_0^2 + (A_{01}+ A_{10} )x_1x_0 + A_{11} x_1^2 \\ P_{001} x_0^2 + (P_{011}+P_{101}) x_0x_1 + P_{111} x_1^2 -k \le 0 \\ P_{002} x_0^2 + (P_{012} +P_{102}) x_0x_1 + P_{112} x_1^2 -k \le 0 $$
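As a sanity check, the matrix form $x^T A x$ and the expanded scalar form above should agree for any $x$. A minimal numeric sketch (the matrix entries and the test point below are made up for illustration):

```python
import numpy as np

# Hypothetical example data: A is the objective matrix, P1 a constraint
# matrix, k the bound. All values are illustrative, not from the question.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
P1 = np.array([[1.0, 0.5], [0.5, 1.0]])
k = 1.0

x = np.array([0.3, 0.4])
x0, x1 = x

# Matrix form of the objective vs. the expanded scalar form.
matrix_form = x @ A @ x
expanded = A[0, 0]*x0**2 + (A[0, 1] + A[1, 0])*x0*x1 + A[1, 1]*x1**2
assert np.isclose(matrix_form, expanded)

# Same check for a constraint p1(x) = x^T P1 x - k.
p1_matrix = x @ P1 @ x - k
p1_expanded = (P1[0, 0]*x0**2 + (P1[0, 1] + P1[1, 0])*x0*x1
               + P1[1, 1]*x1**2 - k)
assert np.isclose(p1_matrix, p1_expanded)
```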
We can now directly pull out the KKT conditions (a generalization of Lagrange multipliers)
https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions
which tell us that an optimal solution $x^*$ of this problem will satisfy the following conditions (writing $f = A_{00} x_0^2 + (A_{01}+ A_{10} )x_1x_0 + A_{11} x_1^2$, $p_1 = P_{001} x_0^2 + (P_{011}+P_{101}) x_0x_1 + P_{111} x_1^2 -k$, and $p_2 = P_{002} x_0^2 + (P_{012} +P_{102}) x_0x_1 + P_{112} x_1^2 -k$):
$$\nabla f(x^*) = \mu_1 \nabla p_1(x^*) + \mu_2 \nabla p_2 (x^*)$$
$$ p_1(x^*) \le 0 $$
$$ p_2(x^*) \le 0 $$
$$ \mu_1 \ge 0, \mu_2 \ge 0$$
$$ \mu_1 p_1(x^*) = 0, \mu_2 p_2(x^*) = 0$$
Let's tackle the first line with the $\nabla$'s by unpacking it for our case:
$$ \begin{bmatrix} 2A_{00}x_0 + (A_{01} + A_{10})x_1 = \mu_1 (2P_{001}x_0 + (P_{011} + P_{101})x_1) + \mu_2 (2P_{002}x_0 + (P_{012} + P_{102})x_1)
\\ 2A_{11}x_1 + (A_{01} + A_{10})x_0 = \mu_1 (2P_{111}x_1 + (P_{011} + P_{101})x_0) + \mu_2 (2P_{112}x_1 + (P_{012} + P_{102})x_0)\end{bmatrix}$$
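Expanded gradient components like these are easy to get wrong by a subscript. A quick finite-difference check (with made-up data) confirms the underlying identity $\nabla(x^T A x) = (A + A^T)x$, whose components are exactly $2A_{00}x_0 + (A_{01}+A_{10})x_1$ and $2A_{11}x_1 + (A_{01}+A_{10})x_0$:

```python
import numpy as np

# Illustrative matrix and point; the check works for any choice.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
x = np.array([0.3, -0.2])

def f(x):
    return x @ A @ x

# Analytic gradient of x^T A x is (A + A^T) x.
grad = (A + A.T) @ x

# Central finite differences agree with the analytic gradient.
eps = 1e-6
num = np.array([(f(x + eps*e) - f(x - eps*e)) / (2*eps) for e in np.eye(2)])
assert np.allclose(grad, num, atol=1e-6)
```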
Now look at the very last two conditions, $\mu_1 p_1(x^*) = 0$ and $\mu_2 p_2(x^*) = 0$ (complementary slackness).
We can unpack these as well to yield
$$ \mu_1 (P_{001} x_0^2 + (P_{011}+P_{101}) x_0x_1 + P_{111} x_1^2 -k) = 0 \\ \mu_2 (P_{002} x_0^2 + (P_{012} +P_{102}) x_0x_1 + P_{112} x_1^2 -k)= 0 $$
Combining these four equations together:
$$ \begin{bmatrix} 2A_{00}x_0 + (A_{01} + A_{10})x_1 = \mu_1 (2P_{001}x_0 + (P_{011} + P_{101})x_1) + \mu_2 (2P_{002}x_0 + (P_{012} + P_{102})x_1)
\\ 2A_{11}x_1 + (A_{01} + A_{10})x_0 = \mu_1 (2P_{111}x_1 + (P_{011} + P_{101})x_0) + \mu_2 (2P_{112}x_1 + (P_{012} + P_{102})x_0)\\ \mu_1 (P_{001} x_0^2 + (P_{011}+P_{101}) x_0x_1 + P_{111} x_1^2 -k) = 0 \\ \mu_2 (P_{002} x_0^2 + (P_{012} +P_{102}) x_0x_1 + P_{112} x_1^2 -k)= 0 \end{bmatrix} $$
We have 4 equations and 4 unknowns $x_0, x_1, \mu_1, \mu_2$. The complementary-slackness conditions split the system into finitely many cases (each multiplier is zero or its constraint is active), each of which can be solved algebraically; among the resulting candidates, select the one that is feasible, has $\mu_1, \mu_2 \ge 0$, and maximizes your objective.
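This case analysis can be sketched symbolically. The matrices below are invented for illustration (and degenerate, under-determined cases are not handled carefully here); the structure is the one described above: for each constraint, either its multiplier vanishes or the constraint holds with equality, giving four cases:

```python
import itertools
import sympy as sp

# Illustrative instance; entries are made up for this sketch.
x0, x1, mu1, mu2 = sp.symbols('x0 x1 mu1 mu2', real=True)
A  = sp.Matrix([[2, 1], [1, 3]])
P1 = sp.Matrix([[1, 0], [0, 2]])
P2 = sp.Matrix([[2, 0], [0, 1]])
k = 1

x = sp.Matrix([x0, x1])
f  = (x.T * A  * x)[0]
p1 = (x.T * P1 * x)[0] - k
p2 = (x.T * P2 * x)[0] - k

# Stationarity: grad f - mu1 grad p1 - mu2 grad p2 = 0.
stationarity = [sp.diff(f - mu1*p1 - mu2*p2, v) for v in (x0, x1)]

candidates = []
# Complementary slackness: for each constraint, either the multiplier
# is zero or the constraint is active -- four cases in total.
for eq1, eq2 in itertools.product((mu1, p1), (mu2, p2)):
    sols = sp.solve(stationarity + [eq1, eq2], [x0, x1, mu1, mu2], dict=True)
    for sol in sols:
        # Default any symbol solve leaves free to 0 (crude; fine here).
        vals = {s: sol.get(s, sp.Integer(0)) for s in (x0, x1, mu1, mu2)}
        if not all(v.is_real for v in vals.values()):
            continue
        # Keep only feasible points with nonnegative multipliers.
        if (p1.subs(vals) <= 0 and p2.subs(vals) <= 0
                and vals[mu1] >= 0 and vals[mu2] >= 0):
            candidates.append((f.subs(vals), vals[x0], vals[x1]))

best = max(candidates, key=lambda c: c[0])
print(best[0])  # maximal value of x^T A x over the feasible set
```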
Best Answer
I'll assume that $H$ is symmetric positive definite (as stated in the question).
The Lagrangian is \begin{align} L(x,\lambda) &= \frac12 x^T Hx + g^T x + \lambda^T (A^Tx - b) \\ &= \frac12 x^T H x+ x^T (A \lambda + g) - \lambda^T b. \end{align} The dual function is \begin{align} g(\lambda) &= \inf_x \, L(x,\lambda) \\ &= -\frac12 (A \lambda + g)^T H^{-1} (A \lambda + g) - \lambda^T b. \end{align}
The dual problem is to maximize $g(\lambda)$ (with no constraints on $\lambda$). Is this equivalent to the dual problem given in the question? From the constraint $Hx + g - A \lambda = 0$, it follows that $x = H^{-1}(A \lambda - g)$. Plugging into the objective function, the stated dual problem reduces to \begin{align} \text{maximize} & \quad -\frac12 (A \lambda - g)^T H^{-1}(A \lambda - g) + b^T \lambda \\ \text{subject to} & \quad \lambda \geq 0. \end{align} This is similar to the dual problem I derived, except that we have $-g$ in place of $g$ and we have an extra constraint $\lambda \geq 0$.
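The closed-form dual function derived above can be checked numerically: at the unconstrained minimizer $x = -H^{-1}(A\lambda + g)$ of the Lagrangian, $L(x,\lambda)$ should equal $g(\lambda)$, and any other $x$ should give a value at least as large. A sketch with random data (sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random instance: n variables, m constraints.
n, m = 4, 2
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)   # symmetric positive definite
g = rng.standard_normal(n)
A = rng.standard_normal((n, m))
b = rng.standard_normal(m)
lam = rng.standard_normal(m)

def lagrangian(x, lam):
    return 0.5 * x @ H @ x + g @ x + lam @ (A.T @ x - b)

# Unconstrained minimizer of L(., lam): H x + g + A lam = 0.
x_star = np.linalg.solve(H, -(A @ lam + g))

# Closed-form dual function from the derivation above.
v = A @ lam + g
dual = -0.5 * v @ np.linalg.solve(H, v) - lam @ b

# L at the minimizer matches the dual; any other x does no better.
x_other = x_star + rng.standard_normal(n)
assert np.isclose(lagrangian(x_star, lam), dual)
assert lagrangian(x_other, lam) >= dual
```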
It seems like the dual problem given in the question is wrong.