I did a preliminary feasibility analysis of our methods, and it appears possible that one could tighten our $n^{\epsilon n}$ loss to something more like $\exp(O(\sqrt{n}))$ in the Gaussian case, but this is still well short of what you want. The main obstacle is potential coupling between permanents of minors, which we were not able to fully avoid.
Here's the heuristic calculation. Suppose that a Gaussian $k \times k$ permanent has some distribution $P_k$. Then a Gaussian $k+1 \times k+1$ permanent $P_{k+1}$, by cofactor expansion, looks like
$$ P_{k+1} = \sum_{i=1}^{k+1} x_i P_k^{(i)}$$
where the $x_i$ are iid Gaussians and the $P_k^{(i)}$ are copies of $P_k$ corresponding to the various $k \times k$ minors of the $k+1 \times k+1$ matrix. (Unlike the determinant, the cofactor expansion of the permanent carries no signs; in any case, signs would not affect the distribution here, as the $x_i$ are symmetric.)
As the entries of the $k+1 \times k+1$ matrix are iid, the $x_i$ are independent of the $P_k^{(i)}$, and so for fixed values of $P_k^{(i)}$, we see that $P_{k+1}$ is distributed normally with mean zero and variance $\sum_{i=1}^{k+1} |P_k^{(i)}|^2$, which we can rewrite as
$$ P_{k+1} = (\sum_{i=1}^{k+1} |P_k^{(i)}|^2)^{1/2} \cdot N_{k+1}\qquad (1)$$
where $N_{k+1}$ is a standard normal random variable (independent of the $P_k^{(i)}$).
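The cofactor expansion is easy to check numerically. Here is a minimal brute-force sketch (the helper names `perm` and `minor` and the size $k=4$ are my own illustrative choices): the permanent of a $(k+1)\times(k+1)$ Gaussian matrix equals its signless expansion along the last row.

```python
import itertools
import math
import random

def perm(M):
    # Permanent via the definition: sum over permutations of products
    # of entries.  Exponential time, so only for small matrices.
    n = len(M)
    return sum(math.prod(M[i][s[i]] for i in range(n))
               for s in itertools.permutations(range(n)))

def minor(M, row, col):
    # Delete the given row and column.
    return [[M[i][j] for j in range(len(M)) if j != col]
            for i in range(len(M)) if i != row]

random.seed(0)
k = 4
A = [[random.gauss(0, 1) for _ in range(k + 1)] for _ in range(k + 1)]

# Cofactor expansion of the permanent along the last row -- note the
# absence of the (-1)^i signs that appear for the determinant.
expansion = sum(A[k][i] * perm(minor(A, k, i)) for i in range(k + 1))
assert abs(perm(A) - expansion) < 1e-9
```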
Now we come up against the key problem: the $P_k^{(i)}$ are identically distributed, but are not jointly independent, because there is a huge amount of overlap between the $k \times k$ minors. So, while heuristically one expects concentration of measure to kick in and make $(\sum_{i=1}^{k+1} |P_k^{(i)}|^2)^{1/2}$ more concentrated than any one of the $P_k^{(i)}$, we don't know how to prevent huge correlations from happening. In the worst case, all the $P_k^{(i)}$ are perfectly correlated to each other, and then (1) could become something more like
$$ P_{k+1} = (k+1)^{1/2} |P_k| \cdot N_{k+1}.$$
This multiplicative normal process would lead $P_n$ to concentrate between $\sqrt{n!} \exp(-O(\sqrt{n}))$ and $\sqrt{n!} \exp(O(\sqrt{n}))$, as can be seen by taking logs and applying the central limit theorem.
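Taking logs, the worst-case recursion becomes $\log |P_n| = \frac12 \log n! + \sum_{k=1}^{n} \log|N_k|$, a sum of iid terms, so the central limit theorem puts its fluctuations about its mean at scale $\sqrt{n}$. A quick Monte Carlo sketch (the value of $n$ and the trial count are arbitrary choices of mine), using $\operatorname{Var}(\log|N|)=\pi^2/8$ for a standard normal $N$:

```python
import math
import random

random.seed(1)
n, trials = 400, 2000

# Worst case: P_{k+1} = sqrt(k+1) |P_k| N_{k+1}, so taking logs,
# log|P_n| = (1/2) log n! + sum_k log|N_k| -- a random walk with iid steps.
def log_walk():
    return sum(math.log(abs(random.gauss(0, 1))) for _ in range(n))

samples = [log_walk() for _ in range(trials)]
mean = sum(samples) / trials
sd = math.sqrt(sum((s - mean) ** 2 for s in samples) / trials)

# CLT prediction for the fluctuation scale: sqrt(n * Var(log|N|))
# with Var(log|N|) = pi^2 / 8.
predicted_sd = math.sqrt(n * math.pi ** 2 / 8)
assert abs(sd / predicted_sd - 1) < 0.1
```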
But this worst case can't actually be the truth - among other things, it contradicts the second moment calculation. So there should be some way to prevent this correlation. Unfortunately, my paper with Van completely evades this issue - we try to get as far as we can just from the obvious fact that disjoint minors are independent from each other. This is why our bounds are significantly worse than $\exp(O(\sqrt{n}))$ from the truth.
As you say, the situation is much better for the determinant of an iid Gaussian matrix. Here we can use the base-times-height formula to express the determinant as a product $\prod_{i=1}^n \hbox{dist}(X_i,V_i)$, where $X_i$ is the $i^{th}$ row and $V_i$ is the span of the first $i-1$ rows. With everything being Gaussian, the $\hbox{dist}(X_i,V_i)^2$ are independent chi-squared random variables, so the log-determinant is a sum of independent variables, and as a consequence one can get the determinant within $\exp(O(\sqrt{\log n}))$ of $\sqrt{(n-1)!}$, which would give you what you want. Unfortunately, there is nothing like the base-times-height formula for the permanent...
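The base-times-height formula itself is easy to verify numerically. Below is a minimal pure-Python sketch (function names and the size $n=5$ are mine, for illustration): $|\det A| = \prod_{i=1}^n \hbox{dist}(X_i,V_i)$, with the distances computed by Gram–Schmidt.

```python
import itertools
import math
import random

def det(M):
    # Determinant via the signed permutation expansion (small n only).
    n = len(M)
    total = 0.0
    for s in itertools.permutations(range(n)):
        sign = 1
        for i in range(n):          # parity by counting inversions
            for j in range(i + 1, n):
                if s[i] > s[j]:
                    sign = -sign
        total += sign * math.prod(M[i][s[i]] for i in range(n))
    return total

def dist_to_span(x, rows):
    # Distance from x to span(rows), via Gram-Schmidt orthonormalization.
    basis = []
    for v in rows:
        w = list(v)
        for b in basis:
            c = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - c * bi for wi, bi in zip(w, b)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        if norm > 1e-12:
            basis.append([wi / norm for wi in w])
    r = list(x)
    for b in basis:
        c = sum(ri * bi for ri, bi in zip(r, b))
        r = [ri - c * bi for ri, bi in zip(r, b)]
    return math.sqrt(sum(ri * ri for ri in r))

random.seed(2)
n = 5
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]

# Base-times-height: |det A| = product of distances of each row to the
# span of the previous rows.
product = math.prod(dist_to_span(A[i], A[:i]) for i in range(n))
assert abs(abs(det(A)) - product) < 1e-8
```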
Finally, I am fairly certain that, at the level of $n^{-\epsilon n}$ losses, one can adapt our paper from the Bernoulli case to the Gaussian case. I don't think the $n^{-\epsilon n}$ loss gets much better in that case, but the $\frac{1}{n^{0.1}}$ bound should improve substantially.
Best Answer
We can show that $$ \mathbb{P}\left(\sum_ia_iX_i^2\le\epsilon\sum_ia_i\right)\le\sqrt{e\epsilon} $$ so that the inequality holds with $c=1/2$ and $C=\sqrt{e}$.
For $\epsilon\ge1$ the right-hand side exceeds $1$, so the inequality is trivial; suppose then that $\epsilon<1$. Without loss of generality we can normalize $\sum_ia_i=1$ (just to simplify the expressions a bit). Then, for any $\lambda\ge0$, since $\mathbb{1}\{Y\le\epsilon\}\le e^{\lambda(\epsilon-Y)}$ for any random variable $Y$,
$$ \begin{align} \mathbb{P}\left(\sum_ia_iX_i^2\le\epsilon\right)&\le\mathbb{E}\left[e^{\lambda\left(\epsilon-\sum_ia_iX_i^2\right)}\right]\cr &=e^{\lambda\epsilon}\prod_i\mathbb{E}\left[e^{-\lambda a_iX_i^2}\right]\cr &=e^{\lambda\epsilon}\prod_i\left(1+2\lambda a_i\right)^{-1/2}\cr &\le e^{\lambda\epsilon}\left(1+2\lambda\right)^{-1/2}, \end{align} $$
where the third line uses the Gaussian moment generating function $\mathbb{E}[e^{-\lambda a_iX_i^2}]=(1+2\lambda a_i)^{-1/2}$, and the last line uses $\prod_i(1+2\lambda a_i)\ge1+2\lambda\sum_ia_i=1+2\lambda$. Taking $\lambda=(\epsilon^{-1}-1)/2$ gives $$ \mathbb{P}\left(\sum_ia_iX_i^2\le\epsilon\right)\le e^{(1-\epsilon)/2}\sqrt{\epsilon}\le\sqrt{e\epsilon}, $$ as claimed.
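As a numerical sanity check (the weights, dimension, and trial count below are arbitrary choices of mine), the bound $\sqrt{e\epsilon}$ can be compared against an empirical estimate of the left-hand side:

```python
import math
import random

random.seed(3)
# Arbitrary nonnegative weights a_i; the X_i are iid standard normals.
a = [random.random() for _ in range(10)]
total = sum(a)
trials = 20000

def small_ball(eps):
    # Empirical estimate of P(sum_i a_i X_i^2 <= eps * sum_i a_i).
    hits = 0
    for _ in range(trials):
        s = sum(ai * random.gauss(0, 1) ** 2 for ai in a)
        if s <= eps * total:
            hits += 1
    return hits / trials

# The empirical probability should never exceed the bound sqrt(e * eps).
for eps in (0.01, 0.1, 0.5):
    assert small_ball(eps) <= math.sqrt(math.e * eps)
```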