Given that your functions are positive semidefinite, there are a number of algorithms you can use (see: https://en.wikipedia.org/wiki/Quadratic_programming#cite_note-6, citation 6). But this problem is simple enough that we don't need such techniques:
Given $x \in \mathbb{R}^2$ we wish to solve
$$ \max x^T A x \\ x^T P_1 x \le k, x^T P_2 x \le k$$
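Before going analytic, a quick numerical sketch of this problem with SciPy can serve as a reference point. The matrices and $k$ below are made-up example values; and since we are maximizing a convex quadratic (a non-convex problem), the solver only finds local optima, so we restart from several points:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data (assumptions, not from the question): a PSD objective
# matrix A, two PSD constraint matrices, and a bound k.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
P1 = np.array([[1.0, 0.0], [0.0, 2.0]])
P2 = np.array([[2.0, 0.0], [0.0, 1.0]])
k = 1.0

# Maximize x^T A x  <=>  minimize -x^T A x.
obj = lambda x: -x @ A @ x
cons = [{"type": "ineq", "fun": lambda x, P=P: k - x @ P @ x} for P in (P1, P2)]

# Multistart: keep the best local optimum found by SLSQP.
best = None
for x0 in [(0.5, 0.5), (1, 0), (0, 1), (-1, 1)]:
    res = minimize(obj, x0, constraints=cons)
    if res.success and (best is None or res.fun < best.fun):
        best = res

print(best.x, -best.fun)
```

For this particular data the maximum sits at the intersection of the two ellipse boundaries, which is exactly the kind of candidate the KKT case analysis below enumerates.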
This can actually be completely concretely spelled out by letting $A = \begin{pmatrix} A_{00} & A_{01} \\ A_{10} & A_{11} \end{pmatrix}$ and, writing $P_{ijm}$ for the $(i,j)$ entry of $P_m$,
$$P_1 = \begin{pmatrix} P_{001} & P_{011} \\ P_{101} & P_{111} \end{pmatrix}$$
$$P_2 = \begin{pmatrix} P_{002} & P_{012} \\ P_{102} & P_{112} \end{pmatrix}$$
Then it follows that we wish to solve
$$ \max x_0 (A_{00} x_0 + A_{01} x_1) + x_1(A_{10} x_0 + A_{11} x_1) \\ x_0 (P_{001} x_0 + P_{011} x_1) + x_1(P_{101} x_0 + P_{111} x_1) -k \le 0 \\ x_0 (P_{002} x_0 + P_{012} x_1) + x_1(P_{102} x_0 + P_{112} x_1) -k \le 0 $$
We rearrange terms here to yield:
$$ \max A_{00} x_0^2 + (A_{01}+ A_{10} )x_1x_0 + A_{11} x_1^2 \\ P_{001} x_0^2 + (P_{011}+P_{101}) x_0x_1 + P_{111} x_1^2 -k \le 0 \\ P_{002} x_0^2 + (P_{012} +P_{102}) x_0x_1 + P_{112} x_1^2 -k \le 0 $$
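As a throwaway sanity check (not part of the derivation), the rearranged scalar form agrees with the matrix form $x^T A x$ for arbitrary values:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))  # arbitrary 2x2 matrix
x = rng.standard_normal(2)
x0, x1 = x

# Expanded scalar form from the rearrangement above
expanded = A[0, 0] * x0**2 + (A[0, 1] + A[1, 0]) * x0 * x1 + A[1, 1] * x1**2
assert np.isclose(x @ A @ x, expanded)
```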
We can now directly read off the KKT conditions (a generalization of Lagrange multipliers)
https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions
which tell us that an optimal solution $x^*$ to this problem will satisfy the following conditions (writing $f = A_{00} x_0^2 + (A_{01}+A_{10}) x_0 x_1 + A_{11} x_1^2$, $p_1 = P_{001} x_0^2 + (P_{011}+P_{101}) x_0x_1 + P_{111} x_1^2 - k$, $p_2 = P_{002} x_0^2 + (P_{012}+P_{102}) x_0x_1 + P_{112} x_1^2 - k$):
Conditions:
$$\nabla f(x^*) = \mu_1 \nabla p_1(x^*) + \mu_2 \nabla p_2 (x^*)$$
$$ p_1(x^*) \le 0 $$
$$ p_2(x^*) \le 0 $$
$$ \mu_1 \ge 0, \mu_2 \ge 0$$
$$ \mu_1 p_1(x^*) = 0, \mu_2 p_2(x^*) = 0$$
Let's tackle the first line, the one with the $\nabla$'s, by unpacking it for our case:
$$ \begin{bmatrix} 2A_{00}x_0 + (A_{01} + A_{10})x_1 = \mu_1 (2P_{001}x_0 + (P_{011} + P_{101})x_1) + \mu_2 (2P_{002}x_0 + (P_{012} + P_{102})x_1)
\\ 2A_{11}x_1 + (A_{01} + A_{10})x_0 = \mu_1 (2P_{111}x_1 + (P_{011} + P_{101})x_0) + \mu_2 (2P_{112}x_1 + (P_{012} + P_{102})x_0)\end{bmatrix}$$
Now look at the very last two equations, the complementary slackness conditions $\mu_1 p_1(x^*) = 0$ and $\mu_2 p_2(x^*) = 0$.
We can unpack these as well to yield
$$ \mu_1 (P_{001} x_0^2 + (P_{011}+P_{101}) x_0x_1 + P_{111} x_1^2 -k) = 0 \\ \mu_2 (P_{002} x_0^2 + (P_{012} +P_{102}) x_0x_1 + P_{112} x_1^2 -k)= 0 $$
Combining these four equations:
$$ \begin{bmatrix} 2A_{00}x_0 + (A_{01} + A_{10})x_1 = \mu_1 (2P_{001}x_0 + (P_{011} + P_{101})x_1) + \mu_2 (2P_{002}x_0 + (P_{012} + P_{102})x_1)
\\ 2A_{11}x_1 + (A_{01} + A_{10})x_0 = \mu_1 (2P_{111}x_1 + (P_{011} + P_{101})x_0) + \mu_2 (2P_{112}x_1 + (P_{012} + P_{102})x_0)\\ \mu_1 (P_{001} x_0^2 + (P_{011}+P_{101}) x_0x_1 + P_{111} x_1^2 -k) = 0 \\ \mu_2 (P_{002} x_0^2 + (P_{012} +P_{102}) x_0x_1 + P_{112} x_1^2 -k)= 0 \end{bmatrix} $$
We have 4 equations and 4 unknowns $x_0, x_1, \mu_1, \mu_2$. Complementary slackness splits the system into cases ($\mu_1 = 0$ or $p_1(x^*) = 0$, and $\mu_2 = 0$ or $p_2(x^*) = 0$); each case is a polynomial system with finitely many solutions and can be solved algebraically. Discard any candidate that violates $p_1 \le 0$, $p_2 \le 0$, or $\mu_1, \mu_2 \ge 0$, then select the one that maximizes your function.
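This case analysis over complementary slackness can be carried out symbolically. Here is a sketch with `sympy`; the matrices $A$, $P_1$, $P_2$ and the bound $k$ are made-up illustrative values, not data from the question:

```python
import itertools
import sympy as sp

x0, x1, m1, m2 = sp.symbols("x0 x1 mu1 mu2", real=True)

# Example data (assumed for illustration): PSD objective and constraints.
A = sp.Matrix([[2, sp.Rational(1, 2)], [sp.Rational(1, 2), 1]])
P1 = sp.Matrix([[1, 0], [0, 2]])
P2 = sp.Matrix([[2, 0], [0, 1]])
k = 1

x = sp.Matrix([x0, x1])
f = (x.T * A * x)[0]
p1 = (x.T * P1 * x)[0] - k
p2 = (x.T * P2 * x)[0] - k

# Stationarity: grad f - mu1 grad p1 - mu2 grad p2 = 0 (two scalar equations).
grad = [sp.diff(f, v) - m1 * sp.diff(p1, v) - m2 * sp.diff(p2, v)
        for v in (x0, x1)]

# Complementary slackness: for each constraint, either mu_i = 0 or p_i = 0.
candidates = []
for c1, c2 in itertools.product([m1, p1], [m2, p2]):
    for sol in sp.solve(grad + [c1, c2], [x0, x1, m1, m2], dict=True):
        vals = {s: sol.get(s, sp.S.Zero) for s in (x0, x1, m1, m2)}
        if all(v.is_real for v in vals.values()) \
           and vals[m1] >= 0 and vals[m2] >= 0 \
           and p1.subs(vals) <= 0 and p2.subs(vals) <= 0:
            candidates.append((f.subs(vals), vals))

# Keep the feasible KKT point with the largest objective value.
best_val, best_pt = max(candidates, key=lambda t: t[0])
print(best_val, best_pt)
```

For this example data, the winning candidate comes from the case where both constraints are active, i.e. the intersection of the two ellipse boundaries.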
If you fix $y$, what remains is a linear program; with $y$ free you have written down a bilinear program.
The good news is that you can easily solve this by bisection in $y$, meaning that the
complexity is the complexity of solving a linear program, times a constant which is logarithmic in the desired precision (as you halve the interval for every linear program you solve).
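The bisection loop itself is generic. Since the linear program is not reproduced here, `feasible(y)` below is a hypothetical stand-in for "the LP with $y$ fixed is feasible"; the assumed monotonicity in $y$ is what makes bisection valid:

```python
def bisect_min(feasible, lo, hi, tol=1e-9):
    """Smallest y in [lo, hi] with feasible(y) True, assuming monotonicity:
    if feasible(y) holds then feasible(y') holds for all y' > y."""
    assert feasible(hi), "upper end of the bracket must be feasible"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):   # in the application: solve the LP with y = mid
            hi = mid        # feasible -> the optimum is at or below mid
        else:
            lo = mid        # infeasible -> the optimum is above mid
    return hi

# Toy oracle standing in for the LP feasibility check:
print(bisect_min(lambda y: y >= 0.3, 0.0, 1.0))
```

Each iteration halves the bracket, so reaching precision `tol` on an interval of width $w$ takes about $\log_2(w/\mathrm{tol})$ LP solves, matching the complexity claim above.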
EDIT: Good news or bad, the problem can be solved analytically, but the infimum is only approached by letting half of the $x_i$ tend to infinity. Let $x_i = \mu$ for even $i$ and $x_i = 0$ for odd $i$. The constraints involving $z$ then all simplify to $z \le \mu$, and they will be active at optimality (otherwise $z$ could be made larger and thus $y$ smaller), hence $z = \mu$. The first constraint simplifies to $(c_1 \mu \lfloor n/2 \rfloor - c_2)/\mu \le y$, or equivalently we are minimizing $c_1 \lfloor n/2 \rfloor - c_2/\mu$. We have $\mu \ge 0$, and for the problem to make sense $c_2 \le 0$ (otherwise the problem is trivial, as one could take the zero solution).
Hence the infimum is $c_1 \lfloor n/2 \rfloor$, which is approached as $\mu \rightarrow \infty$.
Best Answer
I just read the Wikipedia article on QCQPs, and my impression is that a QCQP can only be NP-hard in the non-convex case. Since you specify that you have a convex QCQP, I believe the problem can be solved in polynomial time with interior-point methods.
I could be mistaken, though.
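For a convex instance you don't need anything exotic; a minimal sketch with SciPy's `trust-constr` method (an interior-point-style solver) on a toy convex QCQP, where all the data is made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Toy convex QCQP (all matrices PSD; data assumed for illustration):
#   minimize x^T Q x + c^T x   subject to   x^T P x <= 1.
Q = np.array([[2.0, 0.0], [0.0, 1.0]])
c = np.array([-2.0, -2.0])
P = np.array([[1.0, 0.0], [0.0, 1.0]])

obj = lambda x: x @ Q @ x + c @ x
con = NonlinearConstraint(lambda x: x @ P @ x, -np.inf, 1.0,
                          jac=lambda x: 2 * P @ x)

res = minimize(obj, x0=np.zeros(2), jac=lambda x: 2 * Q @ x + c,
               method="trust-constr", constraints=[con])
print(res.x, res.fun)
```

Because both the objective and the constraint are convex here, any local optimum the solver returns is the global one, which is exactly why the convex case avoids the NP-hardness of general QCQPs.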