This is slightly too long to be a comment, though it is not an answer:
Edit: I am providing a few revisions, but I still have not found a proof.
My previous comment was not quite correct. I "rearranged" the inequality by raising both sides to certain powers, without taking into account that we do not know a priori whether each term is larger or smaller than $1$; depending on that, exponentiating both sides can reverse the inequality. I have returned the question to its original form as suggested by the OP, and I have added one more observation.
Let me first say that I have been working on this question for a bit, and though I have not yet resolved it, I have been having fun trying!
Now, to emphasize the dependence on $n$, let's set
$$
\alpha_n = \sum_{i=1}^n a_i \qquad \beta_n = \sum_{i=1}^n b_i, \qquad \sigma_n = \sum_{i=1}^n c_i,
$$
where $c_i = \sqrt{a_i b_i}$. Further, let's put
$$
A_n = \prod_{i=1}^n (a_i)^{a_i}, \qquad B_n = \prod_{i=1}^n (b_i)^{b_i}, \qquad S_n = \prod_{i=1}^n (c_i)^{c_i}.
$$
Our goal is to show:
\begin{equation}
\left(\frac{A_n}{(\alpha_n)^{\alpha_n}}\right)^{\frac{1}{\alpha_n}} \cdot \left(\frac{B_n}{(\beta_n)^{\beta_n}} \right)^{\frac{1}{\beta_n}} \leq \left(\frac{S_n}{(\sigma_n)^{\sigma_n}}\right)^{\frac{2\sigma_n}{\alpha_n \beta_n}} \tag{1}
\end{equation}
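Not a proof, but for anyone who wants to experiment: below is a minimal numerical sketch (my own, in Python/NumPy; the helper name `log_term` is mine) that tests (1) on a log scale to avoid overflow. A single reported violation would answer the question in the negative; finding none proves nothing, of course.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_term(w):
    """log of prod_i w_i^{w_i} / W^W, where W = sum_i w_i."""
    W = w.sum()
    return float(np.sum(w * np.log(w)) - W * np.log(W))

violations = 0
for _ in range(100_000):
    n = int(rng.integers(1, 8))
    a = rng.uniform(0.01, 5.0, n)
    b = rng.uniform(0.01, 5.0, n)
    c = np.sqrt(a * b)
    alpha, beta, sigma = a.sum(), b.sum(), c.sum()
    lhs = log_term(a) / alpha + log_term(b) / beta    # log of the left side of (1)
    rhs = 2 * sigma / (alpha * beta) * log_term(c)    # log of the right side of (1)
    if lhs > rhs + 1e-9:                              # tolerance for floating-point noise
        violations += 1

print("violations found:", violations)
```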
A few pedestrian observations:
If $a_i = b_i$ for $i = 1, \dots , n$ (which forces $c_i = a_i = b_i$), then $A_n = B_n = S_n$ and $\alpha_n = \beta_n = \sigma_n$, so both sides of (1) equal $\left(A_n/(\alpha_n)^{\alpha_n}\right)^{2/\alpha_n}$ and (1) holds with equality in this case.
Note that $2c_i \leq a_i + b_i$, since $2xy \leq x^2 + y^2$ for all real numbers $x, y$ (take $x = \sqrt{a_i}$, $y = \sqrt{b_i}$). Hence, $2\sigma_n \leq \alpha_n + \beta_n$. Furthermore, Cauchy-Schwarz gives $\sigma_n^2 \leq \alpha_n \beta_n$. Together these imply $(\sigma_n + 1)^2 = \sigma_n^2 + 2\sigma_n + 1 \leq \alpha_n\beta_n + \alpha_n + \beta_n + 1 = (\alpha_n + 1)(\beta_n + 1)$.
I would imagine that with enough creativity, one may find a proof of the inequality involving convexity or a simple application of the AM-GM inequality (which I suppose is much the same thing!).
I have been unable to prove the inequality even in the case $n = 2$ under the normalization $\alpha_n = \beta_n = 1$. I am not hopeful about the general case.
EDIT (2019-06-19): Here is a complete result.
Recall from the classical proof of the Cauchy-Schwarz inequality that we have the identity
$$
\sum_{i=1}^n \lambda_ix_i^2 - \frac{(\sum_{i=1}^n\lambda_ix_iy_i)^2}{\sum_{i=1}^n\lambda_iy_i^2} = \sum_{i=1}^n \lambda_i \left[ x_i - a y_i \right]^2
$$
with
$$
a = \frac{\sum_{i=1}^n\lambda_ix_iy_i}{\sum_{i=1}^n\lambda_iy_i^2}
$$
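(A quick numerical sanity check of this identity on random data, not needed for the proof; the snippet below is my own sketch.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
lam = rng.uniform(0.1, 3.0, n)                 # positive weights lambda_i
x, y = rng.normal(size=n), rng.normal(size=n)

a = np.sum(lam * x * y) / np.sum(lam * y * y)  # the a defined above
lhs = np.sum(lam * x * x) - np.sum(lam * x * y) ** 2 / np.sum(lam * y * y)
rhs = np.sum(lam * (x - a * y) ** 2)
assert abs(lhs - rhs) < 1e-9                   # both sides agree up to rounding error
```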
Hence the claim can be rewritten (introducing the shorthand $F(x_i,y_i,\lambda_i)$, and recalling the normalization $\sum_{i=1}^n x_i^2 = \sum_{i=1}^n y_i^2 = 1$) as:
$$
\lambda_1 \le F(x_i,y_i,\lambda_i) =\sum_{i=1}^n \lambda_i \left[ x_i - a y_i \right]^2 + \left(\sum_{i=1}^n x_iy_i\right)^2 \le \lambda_n.
$$
For the left inequality, we note that $0 \le \lambda_1 \le \lambda_i$ for all $i$ and $0 \le \lambda_1 \le 1$ (if $\lambda_1 = 0$ there is nothing to show, so assume $\lambda_1 > 0$); hence it suffices to show:
\begin{align}
\lambda_1& \le \lambda_1 \sum_{i=1}^n \left[ x_i - a y_i \right]^2 + \lambda_1 \left(\sum_{i=1}^n x_iy_i\right)^2 \\
\Longleftrightarrow 1 &\le \sum_{i=1}^n \left[ x_i - a y_i \right]^2 + \left(\sum_{i=1}^n x_iy_i\right)^2 \\
&= \sum_{i=1}^n \left[ x_i^2 - 2 a x_iy_i + a^2y_i^2\right] + \left(\sum_{i=1}^n x_iy_i\right)^2
\\
&=1 + a^2 -2a \sum_{i=1}^n x_iy_i +\left(\sum_{i=1}^n x_iy_i\right)^2
\\
&=1 + (a - \sum_{i=1}^n x_iy_i )^2
\end{align}
where we used $\sum_{i=1}^n x_i^2 = \sum_{i=1}^n y_i^2 = 1$. Since the final expression is $1$ plus a square, it is at least $1$, and this establishes the left inequality.
For the right inequality we do the following. Let $q = \sum_{i=1}^n x_iy_i$ and replace $x_i$ by $x_i = q y_i + n_i$. The reason for calling the new variable $n_i$ is that $\sum_{i=1}^n y_i n_i = \sum_{i=1}^n y_i (x_i - q y_i) = q - q \sum_{i=1}^n y_i^2 = 0$, so $(n_i)$ can be understood as the component of the vector $x$ which is normal (hence the "n") to the vector $y$. We also have $\sum_{i=1}^n n_i^2 = \sum_{i=1}^n (x_i - q y_i)^2 = 1 - 2q^2 + q^2 = 1 - q^2$, which will be used below.
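(A quick numerical check of these two facts, assuming as above that $x$ and $y$ are unit vectors; again my own sketch, not part of the proof.)

```python
import numpy as np

rng = np.random.default_rng(2)
m = 5
x = rng.normal(size=m); x /= np.linalg.norm(x)    # unit vector x
y = rng.normal(size=m); y /= np.linalg.norm(y)    # unit vector y

q = x @ y
n_vec = x - q * y                                 # n_i = x_i - q*y_i
assert abs(y @ n_vec) < 1e-10                     # sum_i y_i n_i = 0
assert abs(n_vec @ n_vec - (1 - q**2)) < 1e-10    # sum_i n_i^2 = 1 - q^2
```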
With this replacement, the expression in question becomes
\begin{align}
F(x_i,y_i,\lambda_i)
&= \sum_{i=1}^n \lambda_i(q y_i + n_i)^2 + q^2 - \frac{(\sum_{i=1}^n\lambda_i(q y_i + n_i)y_i)^2}{\sum_{i=1}^n\lambda_iy_i^2}\\
&= q^2\sum_{i=1}^n \lambda_i y_i ^2 + 2q \sum_{i=1}^n \lambda_i y_i n_i + \sum_{i=1}^n \lambda_i n_i^2 + \\
&\qquad + q^2- \frac{(q \sum_{i=1}^n\lambda_i y_i^2 + \sum_{i=1}^n\lambda_in_iy_i)^2}{\sum_{i=1}^n\lambda_iy_i^2}\\
&= q^2\sum_{i=1}^n \lambda_i y_i ^2 + 2q \sum_{i=1}^n \lambda_i y_i n_i + \sum_{i=1}^n \lambda_i n_i^2 + \\
&\qquad + q^2- q^2 \sum_{i=1}^n\lambda_i y_i^2 - 2q \sum_{i=1}^n\lambda_in_iy_i -\frac{(\sum_{i=1}^n\lambda_in_iy_i)^2}{\sum_{i=1}^n\lambda_iy_i^2}\\
&= \sum_{i=1}^n \lambda_i n_i^2 + q^2 -\frac{(\sum_{i=1}^n\lambda_in_iy_i)^2}{\sum_{i=1}^n\lambda_iy_i^2}
\end{align}
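(One more numerical cross-check, verifying on random data that this reduced form agrees with the original definition of $F$; my own sketch.)

```python
import numpy as np

rng = np.random.default_rng(3)
m = 5
lam = rng.uniform(0.1, 3.0, m)
x = rng.normal(size=m); x /= np.linalg.norm(x)
y = rng.normal(size=m); y /= np.linalg.norm(y)

q = x @ y
n_vec = x - q * y
# original form of F ...
F1 = lam @ (x * x) + q**2 - (lam @ (x * y)) ** 2 / (lam @ (y * y))
# ... and the reduced form derived above
F2 = lam @ (n_vec * n_vec) + q**2 - (lam @ (n_vec * y)) ** 2 / (lam @ (y * y))
assert abs(F1 - F2) < 1e-9
```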
Structurally, this looks strikingly similar to the original formulation. The difference (which we will exploit) is that the vector $(n_i)$ is now tied to $q$, a relation that was not present before.
Substituting $q^2 = 1 - \sum_{i=1}^n n_i^2 $ (computed above) and dropping the nonnegative term $\frac{(\sum_{i=1}^n\lambda_in_iy_i)^2}{\sum_{i=1}^n\lambda_iy_i^2} \ge 0$ gives
\begin{align}
F(x_i,y_i,\lambda_i) &\le \sum_{i=1}^n \lambda_i n_i^2 + 1 - \sum_{i=1}^n n_i^2
\\ &\le \lambda_n \sum_{i=1}^n n_i^2 + 1 - \sum_{i=1}^n n_i^2
\\
&= 1 + (\lambda_n - 1)\sum_{i=1}^n n_i^2
\end{align}
Further, we have that $\lambda_n - 1 \ge 0 $ and $\sum_{i=1}^n n_i^2 = 1 -q^2 \le 1$, so we can conclude
$$
F(x_i,y_i,\lambda_i) \le 1 + (\lambda_n - 1) = \lambda_n
$$
which is the desired result for the right inequality. This completes the proof. $\qquad \square$
Some interpretation: dropping the term $\frac{(\sum_{i=1}^n\lambda_in_iy_i)^2}{\sum_{i=1}^n\lambda_iy_i^2} \ge 0$ amounts to saying that this term is "not important". This is plausible because, without the $\lambda_i$, the term would be exactly zero, as $\sum_{i=1}^n n_iy_i = 0$. Indeed, a few computer simulations show that, for various choices of the $\lambda_i$, the maximum of the expression in question is attained when the vector $x$ is almost perfectly perpendicular to the vector $y$, hence $x \simeq n$; so dropping the term is "safe", as it stays small and does not change the result by much.
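For the curious, here is a sketch of such a simulation (my own, in Python/NumPy). It enforces $\lambda_1 \le 1 \le \lambda_n$, since the proof uses both bounds, and records the extreme values of $F$ over random unit vectors; one would expect them to stay inside $[\lambda_1, \lambda_n]$.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 4
lam = np.sort(rng.uniform(0.2, 3.0, m))                # lambda_1 <= ... <= lambda_n
lam[0], lam[-1] = min(lam[0], 0.9), max(lam[-1], 1.1)  # ensure lambda_1 <= 1 <= lambda_n

def F(x, y):
    return lam @ (x * x) + (x @ y) ** 2 - (lam @ (x * y)) ** 2 / (lam @ (y * y))

lo, hi = np.inf, -np.inf
for _ in range(100_000):
    x = rng.normal(size=m); x /= np.linalg.norm(x)
    y = rng.normal(size=m); y /= np.linalg.norm(y)
    f = F(x, y)
    lo, hi = min(lo, f), max(hi, f)

# expect lambda_1 <= lo and hi <= lambda_n if the result above is correct
print(f"lambda_1 = {lam[0]:.4f} <= {lo:.4f}, {hi:.4f} <= lambda_n = {lam[-1]:.4f}")
```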
Best Answer
Another way.
From my previous post we need to prove that: $$\sum_{r=1}^na_r\sqrt{\frac{n-1}{n-a_r}}\geq\sum_{r=1}^n\sqrt{a_r},$$ where $\sum\limits_{r=1}^na_r=n$ and $a_r>0$.
Now, by C-S, $$n=\sqrt{n^2}=\sqrt{\sum_{r=1}^n1\cdot\sum_{r=1}^na_r}\geq\sum_{r=1}^n\sqrt{a_r},$$ and by Hölder we obtain: $$\sum_{r=1}^na_r\sqrt{\frac{n-1}{n-a_r}}=\sqrt{\frac{(n-1)\left(\sum\limits_{r=1}^n\frac{a_r}{\sqrt{n-a_r}}\right)^2\sum\limits_{r=1}^na_r(n-a_r)}{\sum\limits_{r=1}^na_r(n-a_r)}}\geq\sqrt{\frac{(n-1)\left(\sum\limits_{r=1}^na_r\right)^3}{\sum\limits_{r=1}^na_r(n-a_r)}}.$$ Using $\sum\limits_{r=1}^na_r=n$, the last expression equals $$\sqrt{\frac{(n-1)n^3}{\sum\limits_{r=1}^n(na_r-a_r^2-n+1)+n(n-1)}}=\sqrt{\frac{(n-1)n^3}{\sum\limits_{r=1}^n(a_r-1)(n-1-a_r)+n(n-1)}}.$$ Since $\sum\limits_{r=1}^n(a_r-1)=0$, we may subtract $(n-2)(a_r-1)$ inside the sum, which gives $$\sqrt{\frac{(n-1)n^3}{\sum\limits_{r=1}^n(a_r-1)(n-1-a_r-n+2)+n(n-1)}}=\sqrt{\frac{(n-1)n^3}{-\sum\limits_{r=1}^n(a_r-1)^2+n(n-1)}}\geq\sqrt{\frac{(n-1)n^3}{n(n-1)}}=n\geq\sum_{r=1}^n\sqrt{a_r}$$ and we are done.
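(And a final numerical sanity check of the reduced claim, normalizing random positive $a_r$ so that $\sum\limits_{r=1}^n a_r = n$; my own sketch, not part of the proof.)

```python
import numpy as np

rng = np.random.default_rng(5)

for _ in range(100_000):
    n = int(rng.integers(2, 10))
    a = rng.uniform(0.01, 1.0, n)
    a *= n / a.sum()                 # normalize: sum a_r = n (then 0 < a_r < n)
    lhs = np.sum(a * np.sqrt((n - 1) / (n - a)))
    rhs = np.sum(np.sqrt(a))
    assert lhs >= rhs - 1e-9         # a failure would contradict the proof above
```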