Define $f(x, y) = \sqrt{|x + y|} - \sqrt{|x-y|}$. Then we want to prove that
\begin{align*}
\mathbb{E}[f(X, Y)] \ge 0
\end{align*}
where $X, Y$ are independent and uniformly distributed over the set of values $\{x_1, \cdots, x_n\}$.
[Remark 1: I know the IMO intends solutions that use only techniques available at the secondary-school level, but the probabilistic notation lets me represent a bunch of summations in a naturally succinct form, and it translates easily back to the summation notation of the original problem.]
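Before diving in, here is a quick numeric sanity check (not part of the proof) that the expectation form matches the original double-sum inequality: since $X, Y$ are uniform, $n^2 \, \mathbb{E}[f(X, Y)]$ is exactly $\sum_{i,j} \sqrt{|x_i + x_j|} - \sum_{i,j} \sqrt{|x_i - x_j|}$. A minimal sketch in Python (the helper name `expected_f` is mine, just for illustration):

```python
# Sanity check: n^2 * E[f(X, Y)] equals the difference of the two double
# sums from the original problem, and it should always be >= 0.
import numpy as np

rng = np.random.default_rng(0)

def expected_f(x):
    """n^2 * E[sqrt(|X+Y|) - sqrt(|X-Y|)] for X, Y uniform on the entries of x."""
    s = np.add.outer(x, x)       # matrix of x_i + x_j
    d = np.subtract.outer(x, x)  # matrix of x_i - x_j
    return np.sum(np.sqrt(np.abs(s)) - np.sqrt(np.abs(d)))

for _ in range(1000):
    x = rng.normal(size=rng.integers(2, 10)) * 10
    assert expected_f(x) >= -1e-9  # small tolerance for floating point
print("E[f(X, Y)] >= 0 held in all random trials")
```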
First, let's look at the regions in the Cartesian plane where $f(x, y) \ge a$ and where $f(x, y) \le -a$, for $a > 0$: the first consists of the two corner regions $\{x \ge a/2,\ y \ge a/2\}$ and $\{x \le -a/2,\ y \le -a/2\}$, and the second of the two corner regions $\{x \ge a/2,\ y \le -a/2\}$ and $\{x \le -a/2,\ y \ge a/2\}$. Reading the probabilities off these regions, we get
\begin{align*}
\mathbb{P}(f(X, Y) \ge a) &= \mathbb{P}\left(X \ge \frac{a}{2}, Y \ge \frac{a}{2}\right) + \mathbb{P}\left(X \le -\frac{a}{2}, Y \le -\frac{a}{2}\right) \\
&= \mathbb{P}\left(X \ge \frac{a}{2}\right)^2 + \mathbb{P}\left(X \le -\frac{a}{2}\right)^2
\end{align*}
and
\begin{align*}
\mathbb{P}(f(X, Y) \le -a) &= \mathbb{P}\left(X \ge \frac{a}{2}, Y \le -\frac{a}{2}\right) + \mathbb{P}\left(X \le -\frac{a}{2}, Y \ge \frac{a}{2}\right) \\
&= 2\mathbb{P}\left(X \ge \frac{a}{2}\right) \mathbb{P}\left(X \le -\frac{a}{2}\right)
\end{align*}
Then, we need the lemma that
\begin{align*}
\mathbb{E}[A] = \int_0^\infty [\mathbb{P}(A \ge a) - \mathbb{P}(A \le -a)]da
\end{align*}
which can be proved by writing $\mathbb{E}[A] = \mathbb{E}[A^+] - \mathbb{E}[A^-]$, where $A^+ = \max(A, 0)$ and $A^- = -\min(A, 0)$, and then applying the tail-sum formula $\mathbb{E}[A^+] = \int_0^\infty \mathbb{P}(A^+ \ge a)\,da$ to each part. Letting $A = f(X, Y)$,
\begin{align*}
\mathbb{E}[f(X, Y)] &= \int_0^\infty[\mathbb{P}(f(X, Y) \ge a) - \mathbb{P}(f(X, Y) \le -a)]da \\
&= \int_0^\infty\left[\mathbb{P}\left(X \ge \frac{a}{2}\right) - \mathbb{P}\left(X \le -\frac{a}{2}\right)\right]^2da \\
&\ge 0
\end{align*}
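For readers who want to see the lemma in action on a concrete discrete distribution, here is a small numeric check (the distribution below is an arbitrary example of mine), approximating the integral with a trapezoid rule on a fine grid:

```python
# Numeric check of the lemma E[A] = int_0^inf [P(A >= a) - P(A <= -a)] da
# on an arbitrary small discrete distribution.
import numpy as np

vals = np.array([-3.0, -0.5, 1.0, 2.5])   # support of A (arbitrary example)
probs = np.array([0.1, 0.4, 0.3, 0.2])    # P(A = vals[i])

exact = float(np.dot(vals, probs))        # E[A] computed directly

a = np.linspace(0.0, 10.0, 200_001)       # grid reaching well past max |vals|
upper = (vals[None, :] >= a[:, None]).astype(float) @ probs   # P(A >= a)
lower = (vals[None, :] <= -a[:, None]).astype(float) @ probs  # P(A <= -a)
integrand = upper - lower
approx = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(a)) / 2.0)

print(exact, approx)  # agree up to the grid discretization error
```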
Remark 2: Note that this argument works verbatim for any $f(x, y) = g(|x+y|) - g(|x - y|)$ with non-decreasing $g: \mathbb{R}_{\ge 0} \rightarrow \mathbb{R}$, and for general distributions of $X, Y$. For example, we also get the result
\begin{align*}
\sum_{i,j} w_i w_j \log(1 + |x_i - x_j|) \le \sum_{i,j} w_i w_j \log(1 + |x_i + x_j|)
\end{align*}
for weights $w_1, \cdots, w_n \ge 0$.
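As a sanity check of this $\log$ variant (again not a proof), one can compare the two weighted double sums on random inputs; the points and nonnegative weights below are arbitrary:

```python
# Sanity check of the log variant: random points x_i and nonnegative
# weights w_i, comparing the two weighted double sums.
import numpy as np

rng = np.random.default_rng(1)

for _ in range(1000):
    n = rng.integers(2, 8)
    x = rng.normal(size=n) * 5
    w = rng.random(n)  # nonnegative weights
    s = np.log1p(np.abs(np.add.outer(x, x)))       # log(1 + |x_i + x_j|)
    d = np.log1p(np.abs(np.subtract.outer(x, x)))  # log(1 + |x_i - x_j|)
    lhs = w @ d @ w
    rhs = w @ s @ w
    assert lhs <= rhs + 1e-9
print("log inequality held in all trials")
```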
Remark 3: I completely forgot to answer the original problem. Note that if we choose $g(x) = \sqrt{x}$ and let $X, Y$ be random variables distributed over $x_1, \cdots, x_n$ with weights $w_1, \cdots, w_n \ge 0$, then we effectively get, for the matrix $\mathbf{M} = [M_{ij}]_{ij}$ with entries $M_{ij} = \sqrt{|x_i + x_j|} - \sqrt{|x_i - x_j|}$,
\begin{align*}
\mathbf{w}^\intercal \mathbf{M}\mathbf{w} \ge 0
\end{align*}
for $\mathbf{w} \in \mathbb{R}^n_{\ge 0}$; we would need arbitrary $\mathbf{w} \in \mathbb{R}^n$ to get genuine positive semidefiniteness. That said, if we replace $\mathbb{P}$ with a quasiprobability distribution (i.e., one where negative measures are allowed), everything in the proof still seems to go through.
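The genuine positive semidefiniteness suggested here can at least be probed numerically: in the random trials below the smallest eigenvalue of $\mathbf{M}$ never dips below roundoff noise (consistent with the integral-representation answer further below, whose final perfect square goes through verbatim with arbitrary real weights):

```python
# Numeric probe of Remark 3: the matrix M_ij = sqrt(|x_i + x_j|) - sqrt(|x_i - x_j|)
# appears to be genuinely PSD, i.e. w^T M w >= 0 even for sign-changing w.
import numpy as np

rng = np.random.default_rng(2)

for _ in range(1000):
    n = rng.integers(2, 8)
    x = rng.normal(size=n) * 5
    M = (np.sqrt(np.abs(np.add.outer(x, x)))
         - np.sqrt(np.abs(np.subtract.outer(x, x))))
    assert np.linalg.eigvalsh(M).min() >= -1e-8  # PSD up to roundoff
print("M was PSD (up to roundoff) in all trials")
```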
If you don't care about the constant, the inequality is rather simple. For all practical purposes (i.e., up to an absolute constant factor) $1-(1-x_i)^k\asymp\min(kx_i,1)=y_i$. Also, we trivially have $\sum_i y_i\le k\sum_i x_i=k$. Now we note that $(1-x_i-x_j)^{k-1}-(1-x_i)^{k-1}(1-x_j)^{k-1}\le 0$ (just because $1-x_i-x_j\le(1-x_i)(1-x_j)$). Thus it suffices to prove that
$$
\sum_{i,j}x_ix_j[1-(1-x_i)(1-x_j)](1-x_i)^{k-1}(1-x_j)^{k-1}\le \frac Ck\sum_i x_iy_i\,.
$$
Now, $1-(1-x_i)(1-x_j)\le x_i+x_j$ and $x_i(1-x_i)^{k-1}\le\min(x_i,\frac 1k)=\frac 1k y_i$, so the LHS is bounded by
$$
\begin{aligned}
\frac 1{k^2}\sum_{i,j}(x_i+x_j)y_iy_j &= \frac 2{k^2}\sum_{i,j}x_iy_iy_j = \frac 2{k^2}\left[\sum_{i}x_iy_i\right]\left[\sum_{j}y_j\right] \\
&\le \frac 2{k^2}\left[\sum_{i}x_iy_i\right]k = \frac 2k\sum_{i}x_iy_i
\end{aligned}
$$
and we are done.
This crude computation doesn't yield $C=2$, though. To get $C=2$ in this way, you just want to define $y_i=1-(1-x_i)^k$ (which is always $\le\min(kx_i,1)$, so the final computation is unaffected) and show that the inequality $kx_i(1-x_i)^{k-1}\le y_i$ still holds. This is also easy once you realize that $1-(1-x)^k=\int_0^x k(1-t)^{k-1}\,dt$ and that the integrand is at least $k(1-x)^{k-1}$ everywhere on $[0,x]$. You can now rewrite the above argument with this slick definition in the same way. Of course, that is not how one would guess it in the first place, which is why I included the crude computation based on the simple idea of replacing hard functions with equivalent simple ones :-)
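Here is a quick numeric check of the displayed sufficient inequality with $C = 2$, under the same assumptions as above ($x_i \ge 0$ with $\sum_i x_i = 1$), for both choices of $y_i$; this is only a sanity check of the computation, not a proof:

```python
# Check: sum_{i,j} x_i x_j [1-(1-x_i)(1-x_j)] (1-x_i)^{k-1} (1-x_j)^{k-1}
#        <= (2/k) sum_i x_i y_i, for y_i = min(k x_i, 1) and y_i = 1-(1-x_i)^k,
# assuming x_i >= 0 and sum_i x_i = 1, as in the argument above.
import numpy as np

rng = np.random.default_rng(3)

for _ in range(1000):
    n = rng.integers(2, 20)
    k = int(rng.integers(1, 50))
    x = rng.random(n)
    x /= x.sum()  # normalize so that sum_i x_i = 1
    lhs = (np.outer(x, x)
           * (1 - np.outer(1 - x, 1 - x))
           * np.outer((1 - x) ** (k - 1), (1 - x) ** (k - 1))).sum()
    for y in (np.minimum(k * x, 1.0), 1 - (1 - x) ** k):
        assert lhs <= (2.0 / k) * np.dot(x, y) + 1e-12
        # the pointwise bound used for the slick choice of y_i:
        assert np.all(k * x * (1 - x) ** (k - 1) <= y + 1e-12)
print("inequality with C = 2 held in all trials")
```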
Best Answer
This is certainly not the kind of answer expected at the IMO, but the proof works in far more general settings; see this posting, for instance.
Note that
$$ \frac{1}{c} \int_{0}^{\infty} \frac{1-\cos (as)}{s^{3/2}} \, \mathrm{d}s = |a|^{1/2} \quad \text{for any} \quad a \in \mathbb{R}, $$
where $c = \int_{0}^{\infty} \frac{1-\cos t}{t^{3/2}} \, \mathrm{d}t \in (0, \infty)$. This is easily proved by substituting $t = |a|s$. Then
\begin{align*}
&\sum_{i,j} |x_i + x_j|^{1/2} - \sum_{i,j} |x_i - x_j|^{1/2} \\
&= \frac{1}{c} \int_{0}^{\infty} \frac{1}{s^{3/2}}\biggl( \sum_{i,j} \cos((x_i - x_j)s) - \cos((x_i + x_j)s) \biggr) \, \mathrm{d}s \\
&= \frac{1}{c} \int_{0}^{\infty} \frac{1}{s^{3/2}}\biggl( \sum_{i,j} 2\sin(x_i s)\sin(x_j s) \biggr) \, \mathrm{d}s \\
&= \frac{1}{c} \int_{0}^{\infty} \frac{2}{s^{3/2}}\biggl( \sum_{i} \sin(x_i s) \biggr)^2 \, \mathrm{d}s \\
&\geq 0.
\end{align*}
Of course, it would be fun to have a more elementary solution.
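For the curious, both key steps are easy to check numerically: the substitution $t = |a|s$ (tested here on truncated integrals, where it becomes an exact scaling identity) and the product-to-sum step $\cos((x_i - x_j)s) - \cos((x_i + x_j)s) = 2\sin(x_i s)\sin(x_j s)$. A sketch, with grid sizes and test values chosen arbitrarily:

```python
# Two numeric sanity checks (not a proof) of the steps above.
import numpy as np

rng = np.random.default_rng(4)

# (1) Substituting t = |a| s gives, on truncated domains, the exact identity
#     int_0^T (1 - cos(a s)) / s^{3/2} ds
#       = |a|^{1/2} int_0^{|a| T} (1 - cos t) / t^{3/2} dt.
def finite_integral(a, T, m=400_000):
    s = np.linspace(1e-9, T, m)  # avoid s = 0; integrand is O(sqrt(s)) there
    f = (1.0 - np.cos(a * s)) / s**1.5
    return float(np.sum((f[1:] + f[:-1]) * np.diff(s)) / 2.0)  # trapezoid rule

a, T = 2.7, 50.0
lhs = finite_integral(a, T)
rhs = np.sqrt(a) * finite_integral(1.0, a * T)
assert abs(lhs - rhs) < 1e-3  # equal up to grid error
print(lhs, rhs)

# (2) sum_{i,j} [cos((x_i - x_j) s) - cos((x_i + x_j) s)] = 2 (sum_i sin(x_i s))^2
for _ in range(100):
    x = rng.normal(size=6)
    s = float(rng.random() * 10)
    d = np.cos(np.subtract.outer(x, x) * s) - np.cos(np.add.outer(x, x) * s)
    assert abs(d.sum() - 2.0 * np.sin(x * s).sum() ** 2) < 1e-9
print("both identities held numerically")
```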