$$\begin{array}{ll} \text{minimize} & f(x) + g(y)\\ \text{subject to} & xy \ge a\\ & x \ge 0\\ & y \ge 0\end{array}$$
where both $f$ and $g$ are convex quadratic functions and $a > 0$. The feasible region is convex and, since $a > 0$, also LMI-representable
$$\{ (x,y) \in \mathbb R^2 : x \geq 0 \land y \geq 0 \land x y \geq a \} = \left\{ (x,y) \in \mathbb R^2 : \begin{bmatrix} x & \sqrt{a}\\ \sqrt{a} & y\end{bmatrix} \succeq \mathrm O_2 \right\}$$
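This identity can be checked directly: a symmetric $2 \times 2$ matrix is positive semidefinite if and only if both diagonal entries and the determinant are nonnegative, so

$$\begin{bmatrix} x & \sqrt{a}\\ \sqrt{a} & y\end{bmatrix} \succeq \mathrm O_2 \iff x \geq 0 \;\land\; y \geq 0 \;\land\; xy - a \geq 0$$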
Hence, the original optimization problem can be rewritten as follows
$$\begin{array}{ll} \text{minimize} & f(x) + g(y)\\ \text{subject to} & \begin{bmatrix} x & \sqrt{a}\\ \sqrt{a} & y\end{bmatrix} \succeq \mathrm O_2\end{array}$$
Introducing optimization variables $s, t \in \mathbb R$, we rewrite the optimization problem in epigraph form
$$\begin{array}{ll} \text{minimize} & s + t\\ \text{subject to} & f(x) \leq s\\ & g(y) \leq t\\ & \begin{bmatrix} x & \sqrt{a}\\ \sqrt{a} & y\end{bmatrix} \succeq \mathrm O_2\end{array}$$
Let $f$ and $g$ be
$$f (x) := f_0 + f_1 x + f_2 x^2 \qquad\qquad\qquad g (y) := g_0 + g_1 y + g_2 y^2$$
where $f_2, g_2 > 0$ (to ensure convexity). Inequality constraints $f(x) \leq s$ and $g(y) \leq t$ can be written in LMI form, as follows
$$\begin{bmatrix} 1 & \sqrt{f_2} \, x\\ \sqrt{f_2} \, x & s - f_0 - f_1 x\end{bmatrix} \succeq \mathrm O_2$$
$$\begin{bmatrix} 1 & \sqrt{g_2} \, y\\ \sqrt{g_2} \, y & t - g_0 - g_1 y\end{bmatrix} \succeq \mathrm O_2$$
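Both follow from the Schur complement: since the $(1,1)$ entry equals $1 > 0$,

$$\begin{bmatrix} 1 & \sqrt{f_2} \, x\\ \sqrt{f_2} \, x & s - f_0 - f_1 x\end{bmatrix} \succeq \mathrm O_2 \iff s - f_0 - f_1 x - f_2 x^2 \geq 0 \iff f(x) \leq s$$

and similarly for $g(y) \leq t$.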
These LMIs also impose the inequalities $s - f_0 - f_1 x \geq 0$ and $t - g_0 - g_1 y \geq 0$, which are redundant: the lines $s = f_0 + f_1 x$ and $t = g_0 + g_1 y$ are tangent to the graphs of $f$ and $g$ at $x = 0$ and $y = 0$, respectively, and a tangent line lies below the graph of a convex function, so both inequalities are implied by $f(x) \leq s$ and $g(y) \leq t$.
Hence, we obtain a semidefinite program (SDP) in variables $x, y, s, t \in \mathbb R$
$$\begin{array}{ll} \text{minimize} & s + t\\ \text{subject to} & \begin{bmatrix} 1 & \sqrt{f_2} \, x & & & & \\ \sqrt{f_2} \, x & s - f_0 - f_1 x & & & & \\ & & 1 & \sqrt{g_2} \, y & & \\ & & \sqrt{g_2} \, y & t - g_0 - g_1 y & & \\ & & & & x & \sqrt{a}\\ & & & & \sqrt{a} & y\end{bmatrix} \succeq \mathrm O_6\end{array}$$
which can be solved numerically using any SDP solver.
If you change coordinates to the eigenbasis of $A$, scaling each eigenvector by the square root of the corresponding eigenvalue, the objective becomes the squared distance from a point. The constraint set is cut out by a quadratic inequality, and remains so after this linear coordinate change. By a further orthonormal coordinate change (diagonalizing $\Sigma-\theta\theta^T$) and a shift, you can usually bring it to the form $d_1 x_1^2+\ldots+d_n x_n^2\leq 1$. So your problem is equivalent, up to affine coordinate changes, to finding the point of the "standard quadric" with the equation above closest to some fixed point in space. If "the point in space" lies inside the constraint set (i.e. if, in the original coordinates, $x_0$ satisfies the constraint), then the distance is zero; otherwise the optimum lies on the boundary. While you can reduce the resulting constrained optimization to a single-variable problem of finding the Lagrange multiplier (see this question, for example), it seems unlikely that you will get a closed-form solution (not a guarantee; perhaps the optimal value can be found even if finding the optimal point is harder).
Here are some details:
The initial coordinate change comes from $A=U^T \Delta U=(\Delta^{1/2} U)^T (\Delta^{1/2} U)=M^TM$ with $M=\Delta^{1/2}U$, so $(x-x_0)^TA(x-x_0)=(M(x-x_0))^T (M (x-x_0))$, and we set $y=M(x-x_0)$ so that the objective is $y^Ty$.
The original $x$ is recoverable as $x=M^{-1}y+x_0$, where $M^{-1}=U^T\Delta^{-1/2}$.
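A quick numerical sanity check of this factorization, using NumPy with random data standing in for $A$ and $x_0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Random symmetric positive definite A and arbitrary x0 (illustrative data).
B = rng.normal(size=(n, n))
A = B.T @ B + n * np.eye(n)
x0 = rng.normal(size=n)

# A = U^T Delta U with orthogonal U (rows of U are eigenvectors of A).
evals, evecs = np.linalg.eigh(A)
U = evecs.T
M = np.sqrt(evals)[:, None] * U          # M = Delta^{1/2} U, so A = M^T M

x = rng.normal(size=n)
y = M @ (x - x0)

# Objective agrees: y^T y = (x - x0)^T A (x - x0).
assert np.isclose(y @ y, (x - x0) @ A @ (x - x0))
# x is recoverable: x = M^{-1} y + x0.
assert np.allclose(np.linalg.solve(M, y) + x0, x)
print("coordinate change verified")
```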
The constraint set is
$\sqrt{x^T \Sigma x} + x^T \theta \leq \varepsilon$
$\sqrt{x^T \Sigma x} \leq (\varepsilon - x^T \theta)$
Since $\Sigma$ is positive definite, the expression under the root is non-negative, so the constraint forces $\varepsilon - x^T \theta \geq 0$; under that condition, squaring gives the equivalent
$x^T \Sigma x \leq (\varepsilon - x^T \theta)^2=\varepsilon^2-2 \varepsilon x^T\theta+ x^T \theta \theta^T x$
$x^T(\Sigma - \theta \theta^T )x +2 \varepsilon x^T\theta -\varepsilon^2\leq 0$
$(M^{-1}y+x_0)^T(\Sigma- \theta \theta^T)(M^{-1}y+x_0) +2 \varepsilon (M^{-1}y+x_0)^T\theta -\varepsilon^2\leq 0$
Collecting terms,
$$\begin{aligned} &y^T(M^{-1})^T(\Sigma- \theta \theta^T) M^{-1}y\\ &\quad + y^T (M^{-1})^T (\Sigma- \theta \theta^T)x_0+ x_0^T (\Sigma- \theta \theta^T) M^{-1} y+ 2 \varepsilon (M^{-1}y)^T\theta\\ &\quad + x_0^T (\Sigma- \theta \theta^T) x_0+ 2 \varepsilon x_0^T\theta -\varepsilon^2 \leq 0\end{aligned}$$
Or $$y^T Q y + y^T\beta +c\leq 0,$$ where $Q=(M^{-1})^T(\Sigma- \theta \theta^T) M^{-1}$, $\beta = 2(M^{-1})^T\left((\Sigma- \theta \theta^T)x_0 + \varepsilon\theta\right)$ and $c = x_0^T (\Sigma- \theta \theta^T) x_0+ 2 \varepsilon x_0^T\theta -\varepsilon^2$. Now, $Q$ is symmetric, so $Q=V^TDV$ with orthogonal $V$ and diagonal $D$, and we set $z=Vy$. The objective is still $y^Ty=z^Tz$. The constraint is now of the form
$z^TDz+z^T\gamma+k \leq 0$, where $\gamma = V\beta$ and $k = c$.
Now we pass to coordinates and play "complete the square":
$\sum_i \left( D_i z_i^2 + \gamma_i z_i \right) \leq -k$
For those $D_i$ that are non-zero we complete the square:
$$D_i z_i^2 + \gamma_i z_i=D_i \left(z_i+\frac{\gamma_i}{2D_i}\right)^2-\frac{\gamma_i^2}{4D_i}=D_i w_i^2+ n_i$$
with $w_i = z_i+\frac{\gamma_i}{2D_i}$ and $n_i = -\frac{\gamma_i^2}{4D_i}$.
So if all $D_i$ are non-zero (i.e. if $\Sigma-\theta\theta^T$ is non-singular), the constraint can be rewritten as $\sum_i D_i w_i^2\leq N$, where $N = -k - \sum_i n_i$, and if $N > 0$ the constraint is indeed
$\sum_i d_i w_i^2\leq 1$ (where $d_i=D_i/N$).
The objective is $z^Tz=(w-w_0)^T(w-w_0)$, where $(w_0)_i=\frac{\gamma_i}{2D_i}$.
So, overall, we are optimizing the distance from a point to a region defined by a quadratic inequality, which can be somewhat standardized and, depending on various parameter values, "often" completely diagonalized (in practice one would need to pay attention to various conditioning issues).
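A numerical end-to-end check of this reduction (NumPy; the data below are made-up illustrative values, and $\beta$, $c$, $\gamma$, $k$ denote the collected linear and constant terms from the derivation above):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Illustrative data (assumed): SPD Sigma, small theta, scalar eps,
# plus an arbitrary SPD objective matrix A and center x0.
B = rng.normal(size=(n, n)); Sigma = B.T @ B + n * np.eye(n)
theta = 0.1 * rng.normal(size=n)
eps = 2.0
C = rng.normal(size=(n, n)); A = C.T @ C + n * np.eye(n)
x0 = rng.normal(size=n)

# First change of variables: A = M^T M with M = Delta^{1/2} U.
evals, evecs = np.linalg.eigh(A)
M = np.sqrt(evals)[:, None] * evecs.T
Minv = np.linalg.inv(M)

# Collected terms of the constraint in the y-coordinates.
S = Sigma - np.outer(theta, theta)
Q = Minv.T @ S @ Minv
beta = 2 * Minv.T @ (S @ x0 + eps * theta)
c = x0 @ S @ x0 + 2 * eps * (x0 @ theta) - eps**2

# Second change: Q = V^T D V with orthogonal V, z = V y.
d, W = np.linalg.eigh(Q)       # Q = W diag(d) W^T, so V = W^T
V = W.T

x = rng.normal(size=n)
y = M @ (x - x0)
z = V @ y
gamma = V @ beta
k = c

lhs_original = x @ S @ x + 2 * eps * (x @ theta) - eps**2
lhs_final = z @ (d * z) + z @ gamma + k
assert np.isclose(lhs_original, lhs_final)   # same constraint function
assert np.isclose(y @ y, z @ z)              # objective preserved
print("reduction verified")
```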
Best Answer
We have the following optimization problem in $\mathrm x \in \mathbb R^n$
$$\begin{array}{ll} \text{maximize} & \left| \mathrm a^{\top} \mathrm x - 1 \right|\\ \text{subject to} & (\mathrm x- \bar{\mathrm x})^{\top} \mathrm P^{-1}(\mathrm x - \bar{\mathrm x}) \leq 1 \end{array}$$
where
$$\left| \mathrm a^{\top} \mathrm x - 1 \right| = \begin{cases} \mathrm a^{\top} \mathrm x - 1 & \text{if } \mathrm a^{\top} \mathrm x \geq 1\\ 1 - \mathrm a^{\top} \mathrm x & \text{if } \mathrm a^{\top} \mathrm x \leq 1\end{cases}$$
Since the objective function is convex (piecewise affine) and the ellipsoid is compact, the maximum is attained on the boundary of the ellipsoid. Hence, let us consider the following quadratically-constrained linear program (QCLP)
$$\begin{array}{ll} \text{extremize} & \mathrm a^{\top} \mathrm x \\ \text{subject to} & (\mathrm x- \bar{\mathrm x})^{\top} \mathrm P^{-1}(\mathrm x - \bar{\mathrm x}) = 1 \end{array}$$
We define the Lagrangian
$$\mathcal L (\mathrm x, \mu) := \mathrm a^{\top} \mathrm x - \frac{\mu}{2} \left( (\mathrm x- \bar{\mathrm x})^{\top} \mathrm P^{-1}(\mathrm x - \bar{\mathrm x}) - 1 \right)$$
Taking the gradient with respect to $\rm x$ and the derivative with respect to $\mu$, we obtain
$$\begin{aligned} \mathrm a - \mu \, \mathrm P^{-1}(\mathrm x - \bar{\mathrm x}) &= 0_n\\ (\mathrm x- \bar{\mathrm x})^{\top} \mathrm P^{-1}(\mathrm x - \bar{\mathrm x}) &= 1\end{aligned}$$
Left-multiplying the first equation by $\rm P$ and re-arranging,
$$\begin{aligned} \mu (\mathrm x - \bar{\mathrm x}) &= \mathrm P \mathrm a \\ (\mathrm x- \bar{\mathrm x})^{\top} \mathrm P^{-1}(\mathrm x - \bar{\mathrm x}) &= 1\end{aligned}$$
If $\mathrm P \mathrm a \neq 0_n$, then $\mu \neq 0$ and, thus,
$$\mathrm x - \bar{\mathrm x} = \frac{1}{\mu} \mathrm P \mathrm a$$
Using the equation of the ellipsoid, we obtain
$$\mu^2 = \mathrm a^\top \mathrm P^\top \mathrm P^{-1} \,\mathrm P \,\mathrm a = \mathrm a^\top \mathrm P \,\mathrm a$$
where $\mathrm a^\top \mathrm P \,\mathrm a > 0$ because $\rm P$ is positive definite and $\mathrm P \mathrm a \neq 0_n$ implies $\mathrm a \neq 0_n$. Hence,
$$\mathrm x - \bar{\mathrm x} = \pm \frac{1}{\sqrt{\mathrm a^\top \mathrm P \,\mathrm a}} \mathrm P \mathrm a$$
and, thus, the minimizer and maximizer of the QCLP are
$$\begin{aligned} \mathrm x_{\min} &:= \bar{\mathrm x} - \frac{1}{\sqrt{\mathrm a^\top \mathrm P \,\mathrm a}} \mathrm P \mathrm a \\\\ \mathrm x_{\max} &:= \bar{\mathrm x} + \frac{1}{\sqrt{\mathrm a^\top \mathrm P \,\mathrm a}} \mathrm P \mathrm a\end{aligned}$$
and the maximum of the original optimization problem is
$$\begin{aligned} \max \left\{ \mathrm a^\top \mathrm x_{\max} - 1, 1 - \mathrm a^\top \mathrm x_{\min} \right\} &= \max \left\{ \mathrm a^\top \bar{\mathrm x} + \sqrt{\mathrm a^\top \mathrm P \,\mathrm a} - 1, 1 - \mathrm a^\top \bar{\mathrm x} + \sqrt{\mathrm a^\top \mathrm P \,\mathrm a} \right\}\\ &= \max \left\{ \mathrm a^\top \bar{\mathrm x} - 1, 1 - \mathrm a^\top \bar{\mathrm x} \right\} + \sqrt{\mathrm a^\top \mathrm P \,\mathrm a}\\ &= \color{blue}{\left| \mathrm a^\top \bar{\mathrm x} - 1 \right| + \sqrt{\mathrm a^\top \mathrm P \,\mathrm a}}\end{aligned}$$
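This closed form can be checked numerically (NumPy; $\mathrm P$, $\bar{\mathrm x}$, $\mathrm a$ below are made-up random data): the two candidate points lie on the boundary, their better objective value matches $\left| \mathrm a^\top \bar{\mathrm x} - 1 \right| + \sqrt{\mathrm a^\top \mathrm P \,\mathrm a}$, and no sampled feasible point exceeds it.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# Illustrative data (assumed): SPD P, center xbar, vector a.
B = rng.normal(size=(n, n)); P = B.T @ B + np.eye(n)
xbar = rng.normal(size=n)
a = rng.normal(size=n)

r = np.sqrt(a @ P @ a)
x_max = xbar + (P @ a) / r
x_min = xbar - (P @ a) / r
closed_form = abs(a @ xbar - 1) + r

# Both candidates lie on the ellipsoid boundary.
Pinv = np.linalg.inv(P)
assert np.isclose((x_max - xbar) @ Pinv @ (x_max - xbar), 1)
assert np.isclose((x_min - xbar) @ Pinv @ (x_min - xbar), 1)

# The better candidate attains the closed-form value.
assert np.isclose(max(abs(a @ x_max - 1), abs(a @ x_min - 1)), closed_form)

# No sampled feasible point beats the closed-form value.
L = np.linalg.cholesky(P)            # x = xbar + L u, ||u|| <= 1, is feasible
best = -np.inf
for _ in range(2000):
    u = rng.normal(size=n); u /= np.linalg.norm(u)
    x = xbar + L @ u * rng.uniform() ** (1 / n)
    best = max(best, abs(a @ x - 1))
assert best <= closed_form + 1e-9
print("closed form verified")
```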