In the arXiv paper "Schoenberg matrices of radial positive definite functions
and Riesz sequences in $L^2(\mathbb{R}^n)$" by L. Golinskii, M. Malamud and L. Oridoroga, Definition 1.1 states
Let $n \in \mathbb{N}$. A real-valued and continuous function $f$ on $\mathbb{R}_+ = [0,\infty)$ is called a radial positive definite function if for an arbitrary finite set $\{x_1,\ldots,x_m\}$, $x_k \in \mathbb{R}^n$, and any $(\xi_1,\ldots,\xi_m) \in \mathbb{C}^m$, $$\sum_{k,j = 1}^{m}f(|x_k-x_j|)\xi_k\overline{\xi}_j \ge 0.$$
Note that this definition is equivalent to saying that the matrix $S \in \mathbb{R}^{m \times m}$ defined by $S(k,j) = f(\|x_k-x_j\|)$ is positive semidefinite for any choice of points $x_1,\ldots,x_m \in \mathbb{R}^n$.
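For concreteness, here is a minimal NumPy sketch of this equivalence (the helper names `schoenberg_matrix` and `is_psd` are mine, not from the paper), using a sample $f$ that is known to be radial positive definite in every dimension, the Gaussian $f(r) = e^{-r^2}$:

```python
import numpy as np

def schoenberg_matrix(f, X):
    """S(k, j) = f(||x_k - x_j||) for the rows x_k of X."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    return f(D)

def is_psd(S, tol=1e-10):
    """Positive semidefinite iff the smallest eigenvalue is (numerically) >= 0."""
    return np.linalg.eigvalsh(S).min() >= -tol

X = np.random.randn(8, 3)                          # 8 random points in R^3
S = schoenberg_matrix(lambda r: np.exp(-r**2), X)  # Gaussian lies in every Phi_n
print(is_psd(S))                                   # True
```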
Also, Theorem 1.2 states the following:
A function $f$ with $f(0) = 1$ belongs to $\Phi_n$ if and only if there exists a probability measure $\nu$ on $\mathbb{R}_+$ such that $$f(r) = \int_{0}^{\infty}\Omega_n(rt)\,\nu(dt), ~~~~ r \in \mathbb{R}_+,$$ where $$\Omega_n(s) := \Gamma(q+1)\left(\dfrac{2}{s}\right)^qJ_q(s) = \sum_{j = 0}^{\infty}\dfrac{\Gamma(q+1)}{j!\,\Gamma(j+q+1)}\left(-\dfrac{s^2}{4}\right)^j, ~~~~ q = \dfrac{n}{2}-1,$$ and $J_q$ is the Bessel function of the first kind of order $q$.
(In their paper $\Phi_n$ denotes the set of radial positive definite functions on $\mathbb{R}^n$.)
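As a quick numerical sanity check of the formula for $\Omega_n$ (my sketch, assuming SciPy; not from the paper), one can verify the special cases $\Omega_2 = J_0$ and $\Omega_3(s) = \sin(s)/s$:

```python
import numpy as np
from scipy.special import gamma, jv

def omega(n, s):
    """Omega_n(s) = Gamma(q+1) (2/s)^q J_q(s) with q = n/2 - 1, for s > 0."""
    q = n / 2 - 1
    return gamma(q + 1) * (2 / s) ** q * jv(q, s)

s = np.linspace(0.1, 20, 200)
print(np.allclose(omega(2, s), jv(0, s)))        # Omega_2 = J_0
print(np.allclose(omega(3, s), np.sin(s) / s))   # Omega_3(s) = sin(s)/s
```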
Theorem 1 in "Metric Spaces and Completely Monotone Functions" by I. J. Schoenberg gives the proof of this result.
By applying this theorem for $n = 3$ and $\nu = \delta_1$, you get that $f(r) = \Omega_3(r) = \dfrac{\sin r}{r}$ is a radial positive definite function in $\mathbb{R}^3$. In other words, for any choice of $m$ points $x_1,\ldots,x_m \in \mathbb{R}^3$, the matrix $S \in \mathbb{R}^{m \times m}$ defined by $S(k,j) = \dfrac{\sin(\|x_k-x_j\|)}{\|x_k-x_j\|}$ is positive semidefinite.
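For completeness, the closed form for $\Omega_3$ follows from $\Gamma(3/2) = \sqrt{\pi}/2$ and the half-integer Bessel identity $J_{1/2}(s) = \sqrt{2/(\pi s)}\,\sin s$:
$$\Omega_3(s) = \Gamma\left(\tfrac{3}{2}\right)\left(\frac{2}{s}\right)^{1/2}J_{1/2}(s) = \frac{\sqrt{\pi}}{2}\,\sqrt{\frac{2}{s}}\,\sqrt{\frac{2}{\pi s}}\,\sin s = \frac{\sin s}{s}.$$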
It should be clear that if you choose the $m$ points $x_1,\ldots,x_m$ in $\mathbb{R}$ or $\mathbb{R}^2$ instead, then $S$ is still guaranteed to be positive semidefinite, since $\mathbb{R}$ and $\mathbb{R}^2$ embed isometrically into $\mathbb{R}^3$. However, that is not the case if the points are chosen in $\mathbb{R}^n$ with $n \ge 4$. joriki's answer gives a nice counterexample for $n \ge 5$. Here is an ugly counterexample (found by randomly generating points in MATLAB) with $6$ points in $\mathbb{R}^4$:
$x_1 = (-0.7,0.8,-0.3,-0.4)$
$x_2 = (0.8,-0.8,-0.9,0.7)$
$x_3 = (-0.1,0.0,0.1,-0.1)$
$x_4 = (-0.8,-0.7,-0.1,0.2)$
$x_5 = (0.2,-0.2,0.5,-0.9)$
$x_6 = (0.4,0.8,0.8,0.9)$
You can check that the $6 \times 6$ sinc distance matrix $S$ for these points has an eigenvalue $\approx -0.0024103$, and thus $S$ is not positive semidefinite.
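The original search was done in MATLAB; here is an equivalent NumPy sketch (my transcription, not the original script) that reproduces the check:

```python
import numpy as np

X = np.array([
    [-0.7,  0.8, -0.3, -0.4],
    [ 0.8, -0.8, -0.9,  0.7],
    [-0.1,  0.0,  0.1, -0.1],
    [-0.8, -0.7, -0.1,  0.2],
    [ 0.2, -0.2,  0.5, -0.9],
    [ 0.4,  0.8,  0.8,  0.9],
])

D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
with np.errstate(invalid="ignore"):
    S = np.sin(D) / D                # 0/0 on the diagonal, fixed next line
np.fill_diagonal(S, 1.0)             # sinc(0) = 1 by continuity
print(np.linalg.eigvalsh(S).min())   # negative, approx -0.0024103 as claimed
```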
Best Answer
Define $f(x, y) = \sqrt{|x + y|} - \sqrt{|x-y|}$. Then we want to prove that \begin{align*} \mathbb{E}[f(X, Y)] \ge 0, \end{align*} where $X, Y$ are independent and uniformly distributed over the set of values $\{x_1, \cdots, x_n\}$.
[Remark 1: I know the IMO intends solutions to use only techniques available at the secondary-school level, but the probabilistic notation lets me write a bunch of summations in a succinct form, and it translates easily back to the summation notation of the original problem.]
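Concretely, with $X, Y$ i.i.d. uniform on $\{x_1, \cdots, x_n\}$, $$\mathbb{E}[f(X, Y)] = \frac{1}{n^2}\sum_{i,j=1}^{n} f(x_i, x_j) = \frac{1}{n^2}\left(\sum_{i,j}\sqrt{|x_i + x_j|} - \sum_{i,j}\sqrt{|x_i - x_j|}\right),$$ so $\mathbb{E}[f(X, Y)] \ge 0$ is exactly the inequality to be proved.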
First, let's observe the regions in the Cartesian plane where $f(x, y) \ge a$ and where $f(x, y) \le -a$, respectively (the original answer illustrates these two regions with a figure, omitted here):
And therefore \begin{align*} \mathbb{P}(f(X, Y) \ge a) &= \mathbb{P}\left(X \ge \frac{a}{2}, Y \ge \frac{a}{2}\right) + \mathbb{P}\left(X \le -\frac{a}{2}, Y \le -\frac{a}{2}\right) \\ &= \mathbb{P}\left(X \ge \frac{a}{2}\right)^2 + \mathbb{P}\left(X \le -\frac{a}{2}\right)^2 \end{align*} and \begin{align*} \mathbb{P}(f(X, Y) \le -a) &= \mathbb{P}\left(X \ge \frac{a}{2}, Y \le -\frac{a}{2}\right) + \mathbb{P}\left(X \le -\frac{a}{2}, Y \ge \frac{a}{2}\right) \\ &= 2\,\mathbb{P}\left(X \ge \frac{a}{2}\right)\mathbb{P}\left(X \le -\frac{a}{2}\right), \end{align*} where the products come from the independence of $X$ and $Y$. Then we need the lemma \begin{align*} \mathbb{E}[A] = \int_0^\infty [\mathbb{P}(A \ge a) - \mathbb{P}(A \le -a)]\,da, \end{align*}
which can be proved by writing $\mathbb{E}[A] = \mathbb{E}[A^+] - \mathbb{E}[A^-]$, where $A^+ = \max(A, 0)$ and $A^- = -\min(A, 0)$, and applying the tail-sum formula to each part: by Tonelli's theorem, $\mathbb{E}[A^+] = \mathbb{E}\left[\int_0^\infty \mathbf{1}\{A \ge a\}\,da\right] = \int_0^\infty \mathbb{P}(A \ge a)\,da$, and similarly $\mathbb{E}[A^-] = \int_0^\infty \mathbb{P}(A \le -a)\,da$. Letting $A = f(X, Y)$,
\begin{align*} \mathbb{E}[f(X, Y)] &= \int_0^\infty[\mathbb{P}(f(X, Y) \ge a) - \mathbb{P}(f(X, Y) \le -a)]da \\ &= \int_0^\infty\left[\mathbb{P}\left(X \ge \frac{a}{2}\right) - \mathbb{P}\left(X \le -\frac{a}{2}\right)\right]^2da \\ &\ge 0 \end{align*}
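As a sanity check (not part of the proof), one can estimate $\mathbb{E}[f(X, Y)]$ over many random instances; a sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x, y: np.sqrt(np.abs(x + y)) - np.sqrt(np.abs(x - y))

worst = np.inf
for _ in range(10_000):
    x = rng.normal(size=int(rng.integers(2, 8)))  # a random instance x_1, ..., x_n
    E = f(x[:, None], x[None, :]).mean()          # E[f(X, Y)] for X, Y iid uniform
    worst = min(worst, E)
print(worst)                                      # stays >= 0 (up to rounding)
```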
Remark 2: Note that this proof works for any $f(x, y) = g(|x+y|) - g(|x - y|)$ with non-decreasing $g: \mathbb{R}_{\ge 0} \rightarrow \mathbb{R}$, and for general distributions of $X, Y$. For example, we also get \begin{align*} \sum_{i,j} w_i w_j \log(1 + |x_i - x_j|) \le \sum_{i,j} w_i w_j \log(1 + |x_i + x_j|) \end{align*} for weights $w_1, \cdots, w_n \ge 0$.
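A quick numerical check of this logarithmic example (my sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
g = np.log1p                                      # g(s) = log(1 + s)

for _ in range(10_000):
    x = rng.normal(size=6)
    w = rng.random(6)                             # nonnegative weights
    lhs = (w[:, None] * w * g(np.abs(x[:, None] - x))).sum()
    rhs = (w[:, None] * w * g(np.abs(x[:, None] + x))).sum()
    assert lhs <= rhs + 1e-9                      # claimed inequality holds
```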
Remark 3: Completely forgot to answer the original problem. Note that if we choose $g(x) = \sqrt{x}$ and let $X, Y$ be random variables distributed over $x_1, \cdots, x_n$ with weights $w_1, \cdots, w_n \ge 0$, then for the matrix $\mathbf{M} = [M_{ij}]_{ij}$ with $M_{ij} = \sqrt{|x_i + x_j|} - \sqrt{|x_i - x_j|}$ we effectively get \begin{align*} \mathbf{w}^\intercal \mathbf{M}\mathbf{w} \ge 0 \end{align*} for $\mathbf{w} \in \mathbb{R}^n_{\ge 0}$; we would need it for all $\mathbf{w} \in \mathbb{R}^n$ to get genuine positive semidefiniteness. But if we replace $\mathbb{P}$ with a quasiprobability distribution (i.e., negative weights are allowed), everything in the proof still seems to go through.
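Consistent with that closing remark, $\mathbf{M}$ does look genuinely positive semidefinite in experiments; a small eigenvalue check (my sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(10_000):
    x = rng.normal(size=6)
    M = np.sqrt(np.abs(x[:, None] + x)) - np.sqrt(np.abs(x[:, None] - x))
    assert np.linalg.eigvalsh(M).min() >= -1e-9   # no significantly negative eigenvalue
```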