As mentioned in the comments, your approach using the eigenvalue decomposition is correct. In this case, we have that
$$ Q=VDV^T$$
which can be written as
$$ Q=(VD^{1/2})(VD^{1/2})^T. $$
(As noted in the comments, other decompositions would work as well, but I'll use this one, since it's the one you mention.) Your concern is that, if $Q$ is a positive definite matrix, the matrix $L$ obtained from the Cholesky decomposition isn't the same as the matrix $VD^{1/2}$ obtained above. That is true, but it doesn't matter: both factorizations lead to the same reformulated set.
Consider your example
$$ Q=\begin{pmatrix}2&-5/2\\-5/2&4\end{pmatrix}.$$
If we take the Cholesky decomposition, then, to a few digits of accuracy,
$$ L=\begin{pmatrix}1.414&0\\-1.768&0.935\end{pmatrix}. $$
If we take the eigenvalue decomposition, then, to a few digits of accuracy,
$$ VD^{1/2}=\begin{pmatrix}-0.459&1.338\\-0.311&-1.976\end{pmatrix}. $$
Indeed, $L\neq{VD^{1/2}}$. However, let's look at the plots of the level sets for all three formulations:

[plots omitted: the level sets of the original quadratic constraint and of both factorized reformulations coincide]

Voilà.
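As a quick numerical sanity check, here is a minimal numpy sketch (the matrix is hard-coded from the example above; the variable names are my own):

```python
import numpy as np

Q = np.array([[2.0, -2.5], [-2.5, 4.0]])

# Cholesky factor: Q = L @ L.T (numpy returns the lower-triangular factor)
L = np.linalg.cholesky(Q)

# Eigenvalue factor: Q = (V sqrt(D)) @ (V sqrt(D)).T
w, V = np.linalg.eigh(Q)
M = V @ np.diag(np.sqrt(w))

print(np.allclose(L @ L.T, Q), np.allclose(M @ M.T, Q))  # True True

# Both factors induce the same quadratic form, hence the same level sets
x = np.random.randn(2)
print(np.linalg.norm(L.T @ x), np.linalg.norm(M.T @ x), np.sqrt(x @ Q @ x))
```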
Edit: a note on why this works. Essentially, we are employing the following theorem:
Theorem: Let $Q\in\mathbb{R}^{n\times{n}}$ be positive semidefinite, and let $b\in\mathbb{R}^n$, $c\in\mathbb{R}$. Let $L\in\mathbb{R}^{n\times{n}}$ be any matrix such that $Q=L^TL$. Then the sets
$$ S_1=\bigg\{x\in\mathbb{R}^n\ \bigg|\ x^TQx+b^Tx+c\leqslant0\bigg\} $$
and
$$ S_2=\bigg\{x\in\mathbb{R}^n\ \bigg|\ \left\|\begin{array}{c}(1+b^Tx+c)/2\\Lx\end{array}\right\|_2\leqslant(1-b^Tx-c)/2\bigg\} $$
are equal.
This result makes no reference to the form of $L$, only that $Q=L^TL$. Hence, if there exist $L_1$ and $L_2$ such that $Q=L_1^TL_1$ and $Q=L_2^TL_2$, then the sets
$$ F_i=\bigg\{x\in\mathbb{R}^n\ \bigg|\ \left\|\begin{array}{c}(1+b^Tx+c)/2\\L_ix\end{array}\right\|_2\leqslant(1-b^Tx-c)/2\bigg\} $$
for $i=1,2$ are the same (as illustrated above), even though $L_1\neq{L_2}$. (One bookkeeping point: the factors computed above satisfy $Q=L_iL_i^T$ rather than $Q=L_i^TL_i$, so it is their transposes that play the role of $L$ in the theorem.)
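To make the set equality concrete, here is a small numpy sketch; the values of $b$ and $c$ are arbitrary choices for illustration, and the factors are transposed as noted above so that $Q=L_i^TL_i$:

```python
import numpy as np

Q = np.array([[2.0, -2.5], [-2.5, 4.0]])
b = np.array([1.0, -1.0])   # arbitrary b and c, chosen only for illustration
c = -3.0

L1 = np.linalg.cholesky(Q).T          # Q = L1.T @ L1
w, V = np.linalg.eigh(Q)
L2 = (V @ np.diag(np.sqrt(w))).T      # Q = L2.T @ L2

def in_S1(x):
    return x @ Q @ x + b @ x + c <= 0

def in_S2(L, x):
    t = b @ x + c
    lhs = np.linalg.norm(np.concatenate(([(1 + t) / 2], L @ x)))
    return lhs <= (1 - t) / 2

# Set membership agrees for both factors at random points
for x in np.random.randn(1000, 2):
    assert in_S1(x) == in_S2(L1, x) == in_S2(L2, x)
```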
Both the objective $x_1^2+2x_2^2+10x_1x_3+25x_3^2$ and the constraint $x_1^2+x_2^2+x_3^2=k^2$ are homogeneous of degree two, so setting $x_2 = \lambda x_1$ and $x_3 = \mu x_1$ we arrive at
$$
\min(\max) \left(x_1^2\left(1+2\lambda^2+10\mu+25\mu^2\right)\right), \ \ \ \text{s. t.}\ \ \ x_1^2(1+\lambda^2+\mu^2)=k^2
$$
Solving the constraint for $x_1^2$ and substituting, we can proceed without the constraint:
$$
\min(\max)_{\lambda,\mu} \frac{\left(1+2\lambda^2+10\mu+25\mu^2\right)k^2}{1+\lambda^2+\mu^2}
$$
Differentiating with respect to $\lambda$ and $\mu$, we find the stationarity conditions
$$
\begin{cases}
2 \lambda \left(10 \mu + 23 \mu^2 - 1\right) k^2 = 0\\
2 \left( 5 \mu^2 - 5 - 5 \lambda^2 - 24 \mu - 23 \lambda^2 \mu \right) k^2 = 0
\end{cases}
$$
giving the real solutions $(\lambda=0,\ \mu = -\frac 15)$ and $(\lambda=0,\ \mu = 5)$, with respective objective values $0$ (the minimum) and $26k^2$ (the maximum).
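As a cross-check: the objective is the quadratic form $x^TAx$ for the matrix $A$ below, and on the sphere $x_1^2+x_2^2+x_3^2=k^2$ its extrema are $k^2$ times the extreme eigenvalues of $A$. A quick numpy verification (my own sketch):

```python
import numpy as np

# Matrix of the quadratic form x1^2 + 2 x2^2 + 10 x1 x3 + 25 x3^2
A = np.array([[1.0, 0.0, 5.0],
              [0.0, 2.0, 0.0],
              [5.0, 0.0, 25.0]])

# On ||x|| = k, the extrema of x^T A x are k^2 * (extreme eigenvalues of A)
w = np.linalg.eigvalsh(A)
print(w.min(), w.max())   # ~0.0 and 26.0, matching the minimum 0 and maximum 26 k^2
```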
You can directly brute-force this problem: given a function $f\colon\mathbb R^N \to\mathbb R, \theta\mapsto f(\theta)$ with Hessian $H=\frac{\partial^2 f}{\partial \theta^2}$, the quadratic form $x^T H x$ can be computed without explicitly forming $H$:
$$ x^T H x = x^T \frac{\partial^2 f}{\partial \theta^2} x = x^T\frac{\partial}{\partial \theta} \Big(\frac{\partial f}{\partial \theta} x\Big) $$
Once you implement $f$ in a framework that supports automatic differentiation, such as PyTorch or TensorFlow, you can compute it efficiently via two passes of automatic differentiation.
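For instance, here is a minimal PyTorch sketch using two calls to `torch.autograd.grad` (the helper name `hvp` and the example function are my own illustration, not from the original post):

```python
import torch

def hvp(f, theta, x):
    """Hessian-vector product H @ x via double backward, without forming H."""
    theta = theta.detach().requires_grad_(True)
    # First pass: gradient of f, keeping the graph so we can differentiate again
    g = torch.autograd.grad(f(theta), theta, create_graph=True)[0]
    # Second pass: the gradient of the scalar (g . x) is H @ x
    return torch.autograd.grad(g @ x, theta)[0]

theta = torch.randn(5)
x = torch.randn(5)
f = lambda t: (t ** 4).sum()      # any twice-differentiable scalar function
quad_form = x @ hvp(f, theta, x)  # x^T H x as a plain inner product
```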
At no point in this process is the matrix $H$ explicitly constructed. If you have enough RAM, you can also stack all the $x$ vectors and run everything in a single sweep (using an appropriate tensor contraction instead of the inner product `.dot`); a sketch of one possible batched variant follows.
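This batched sketch assumes PyTorch 2.x's `torch.func` API; the names `X`, `hvp_single`, and `quad` are my own:

```python
import torch
from torch.func import grad, jvp, vmap

f = lambda t: (t ** 4).sum()   # same example function as above
theta = torch.randn(5)
X = torch.randn(100, 5)        # each row is one x vector

# Forward-over-reverse Hessian-vector product for a single vector v
hvp_single = lambda v: jvp(grad(f), (theta,), (v,))[1]

HX = vmap(hvp_single)(X)                # rows are H @ x_i; H is never formed
quad = torch.einsum('bi,bi->b', X, HX)  # all x_i^T H x_i in one contraction
```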